Test Report: Docker_Linux_containerd_arm64 21997

ee66eb73e5650a3c34c21fac75605dac5b258565:2025-12-02:42611

Failed tests (34/417)

Order  Failed test  Duration (s)
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 507.59
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 368.26
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.22
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.3
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.31
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 736.62
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.34
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 2.28
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.06
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.35
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.65
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 1.45
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.56
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.09
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 112.1
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.1
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.27
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.26
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.26
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.28
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.26
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.41
358 TestKubernetesUpgrade 789.62
411 TestStartStop/group/no-preload/serial/FirstStart 512.69
437 TestStartStop/group/newest-cni/serial/FirstStart 507.06
438 TestStartStop/group/no-preload/serial/DeployApp 3.02
439 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 117.38
442 TestStartStop/group/no-preload/serial/SecondStart 370.24
444 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 117.03
447 TestStartStop/group/newest-cni/serial/SecondStart 373.54
448 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.33
452 TestStartStop/group/newest-cni/serial/Pause 9.5
467 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 255.07
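
To reproduce a single row from this table locally, the integration suite can be invoked directly. A hypothetical invocation (the -run pattern is copied from the first failure above; the exact harness flags this CI job passes may differ):

	# Hypothetical local repro of the first failure in the table.
	go test ./test/integration -v -timeout 60m \
	  -run "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy"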
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (507.59s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-753958 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1202 21:02:36.513725  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:03:04.221077  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:44.126930  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:44.133332  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:44.144827  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:44.166298  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:44.207949  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:44.289492  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:44.451250  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:44.773066  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:45.415410  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:46.697342  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:49.260318  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:54.382320  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:05:04.624407  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:05:25.106043  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:06:06.067675  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:07:27.989693  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:07:36.513767  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-753958 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m26.066342345s)

-- stdout --
	* [functional-753958] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-753958" primary control-plane node in "functional-753958" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Found network options:
	  - HTTP_PROXY=localhost:41313
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:41313 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-753958 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-753958 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000311038s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000050144s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000050144s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-753958 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
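
The stderr block above carries two actionable hints: the NO_PROXY warning near the top and the cgroup-driver suggestion at the bottom. A possible manual retry combining both, using only flags the log itself mentions (a sketch for this profile, not verified against this job):

	# Include the minikube IP in NO_PROXY, per the stderr warning.
	export NO_PROXY="${NO_PROXY:+$NO_PROXY,}192.168.49.2"
	out/minikube-linux-arm64 delete -p functional-753958
	# Same flags as the failing start, plus the kubelet cgroup-driver suggestion.
	out/minikube-linux-arm64 start -p functional-753958 --memory=4096 --apiserver-port=8441 \
	  --wait=all --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd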
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753958
helpers_test.go:243: (dbg) docker inspect functional-753958:

-- stdout --
	[
	    {
	        "Id": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	        "Created": "2025-12-02T21:00:39.470229988Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301734,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:00:39.535019201Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hostname",
	        "HostsPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hosts",
	        "LogPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a-json.log",
	        "Name": "/functional-753958",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753958:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-753958",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	                "LowerDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753958",
	                "Source": "/var/lib/docker/volumes/functional-753958/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753958",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753958",
	                "name.minikube.sigs.k8s.io": "functional-753958",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44df82336b1507d3d877e818baebb098332071ab7b3e3f7343e15c1fe55b3ab1",
	            "SandboxKey": "/var/run/docker/netns/44df82336b15",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753958": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:7f:7f:d7:c5:84",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e90d0c1216d32743827f22180e4e07c31360f0f3cc3431312aff46869716bb9",
	                    "EndpointID": "5ead8efafa1df1b03c8f1f51c032157081a17706bc48186adc0670bc42c0b521",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753958",
	                        "321ef4a88b51"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
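
In the inspect output above, container port 8441/tcp (the --apiserver-port) is published on host port 33111. Two standard docker CLI ways to read that mapping without scanning the full JSON (container name taken from this run):

	docker port functional-753958 8441/tcp
	# or via a Go template over the same inspect data:
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-753958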
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958: exit status 6 (342.311802ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1202 21:09:04.647822  307441 status.go:458] kubeconfig endpoint: get endpoint: "functional-753958" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
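
The status output above reports a stale kubectl context (the profile is absent from the kubeconfig); the follow-up it suggests would be:

	out/minikube-linux-arm64 -p functional-753958 update-context
	kubectl config current-context   # check which context kubectl now targets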
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-446665 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ ssh            │ functional-446665 ssh sudo cat /etc/ssl/certs/2632412.pem                                                                                                       │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls                                                                                                                                      │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ ssh            │ functional-446665 ssh sudo cat /usr/share/ca-certificates/2632412.pem                                                                                           │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ ssh            │ functional-446665 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image load --daemon kicbase/echo-server:functional-446665 --alsologtostderr                                                                   │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ ssh            │ functional-446665 ssh sudo cat /etc/test/nested/copy/263241/hosts                                                                                               │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls                                                                                                                                      │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image save kicbase/echo-server:functional-446665 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image rm kicbase/echo-server:functional-446665 --alsologtostderr                                                                              │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls                                                                                                                                      │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ update-context │ functional-446665 update-context --alsologtostderr -v=2                                                                                                         │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ update-context │ functional-446665 update-context --alsologtostderr -v=2                                                                                                         │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ update-context │ functional-446665 update-context --alsologtostderr -v=2                                                                                                         │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls                                                                                                                                      │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image save --daemon kicbase/echo-server:functional-446665 --alsologtostderr                                                                   │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls --format yaml --alsologtostderr                                                                                                      │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls --format short --alsologtostderr                                                                                                     │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls --format json --alsologtostderr                                                                                                      │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls --format table --alsologtostderr                                                                                                     │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ ssh            │ functional-446665 ssh pgrep buildkitd                                                                                                                           │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │                     │
	│ image          │ functional-446665 image build -t localhost/my-image:functional-446665 testdata/build --alsologtostderr                                                          │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls                                                                                                                                      │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ delete         │ -p functional-446665                                                                                                                                            │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ start          │ -p functional-753958 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:00:38
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:00:38.271371  301428 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:00:38.271474  301428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:00:38.271478  301428 out.go:374] Setting ErrFile to fd 2...
	I1202 21:00:38.271482  301428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:00:38.271751  301428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:00:38.272150  301428 out.go:368] Setting JSON to false
	I1202 21:00:38.272927  301428 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9777,"bootTime":1764699462,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 21:00:38.272984  301428 start.go:143] virtualization:  
	I1202 21:00:38.277384  301428 out.go:179] * [functional-753958] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:00:38.282008  301428 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:00:38.282136  301428 notify.go:221] Checking for updates...
	I1202 21:00:38.288781  301428 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:00:38.293076  301428 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:00:38.296128  301428 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 21:00:38.299257  301428 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:00:38.302323  301428 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:00:38.305550  301428 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:00:38.339228  301428 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:00:38.339333  301428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:00:38.395896  301428 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-02 21:00:38.387077419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:00:38.395991  301428 docker.go:319] overlay module found
	I1202 21:00:38.399209  301428 out.go:179] * Using the docker driver based on user configuration
	I1202 21:00:38.402137  301428 start.go:309] selected driver: docker
	I1202 21:00:38.402149  301428 start.go:927] validating driver "docker" against <nil>
	I1202 21:00:38.402161  301428 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:00:38.402895  301428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:00:38.457208  301428 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-02 21:00:38.448783961 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:00:38.457360  301428 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 21:00:38.457571  301428 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 21:00:38.460712  301428 out.go:179] * Using Docker driver with root privileges
	I1202 21:00:38.463650  301428 cni.go:84] Creating CNI manager for ""
	I1202 21:00:38.463707  301428 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 21:00:38.463714  301428 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 21:00:38.463789  301428 start.go:353] cluster config:
	{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:00:38.466962  301428 out.go:179] * Starting "functional-753958" primary control-plane node in "functional-753958" cluster
	I1202 21:00:38.469854  301428 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 21:00:38.472821  301428 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:00:38.475713  301428 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 21:00:38.475786  301428 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:00:38.499704  301428 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:00:38.499715  301428 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 21:00:38.537639  301428 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1202 21:00:38.725625  301428 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
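The two 404s above are minikube probing its preload mirrors: when neither the GCS bucket nor the GitHub release hosts a preloaded image tarball for this Kubernetes version, it falls back to loading individual cached images (the cache.go lines that follow). A minimal sketch of the same probe, with the URL layout copied from the log lines rather than from minikube's source:

// Illustrative only: check whether a preload tarball exists by issuing an
// HTTP HEAD against the first URL logged above.
package main

import (
	"fmt"
	"net/http"
)

func preloadExists(k8sVersion, runtime, arch string) (bool, error) {
	url := fmt.Sprintf(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/%s/preloaded-images-k8s-v18-%s-%s-overlay2-%s.tar.lz4",
		k8sVersion, k8sVersion, runtime, arch)
	resp, err := http.Head(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := preloadExists("v1.35.0-beta.0", "containerd", "arm64")
	fmt.Println(ok, err) // expected: false <nil>, matching the 404 in the log
}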
	I1202 21:00:38.725816  301428 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:00:38.725905  301428 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 21:00:38.725914  301428 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 111.627µs
	I1202 21:00:38.725927  301428 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 21:00:38.725937  301428 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:00:38.725965  301428 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 21:00:38.725969  301428 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 33.616µs
	I1202 21:00:38.725974  301428 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 21:00:38.725982  301428 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:00:38.725998  301428 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/config.json ...
	I1202 21:00:38.726006  301428 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 21:00:38.726010  301428 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 29.472µs
	I1202 21:00:38.726016  301428 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 21:00:38.726023  301428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/config.json: {Name:mke67144143d67dc20fbcc161445c3218cd8c3b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:00:38.726025  301428 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:00:38.726054  301428 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:00:38.726129  301428 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 21:00:38.726136  301428 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 83.485µs
	I1202 21:00:38.726141  301428 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 21:00:38.726150  301428 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:00:38.726176  301428 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 21:00:38.726180  301428 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.883µs
	I1202 21:00:38.726184  301428 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 21:00:38.726193  301428 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:00:38.726193  301428 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:00:38.726217  301428 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 21:00:38.726217  301428 start.go:360] acquireMachinesLock for functional-753958: {Name:mk3203202a2efc5b27c2a0a16d932dc1b1f07522 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:00:38.726221  301428 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 29.915µs
	I1202 21:00:38.726226  301428 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 21:00:38.726234  301428 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:00:38.726259  301428 start.go:364] duration metric: took 34.223µs to acquireMachinesLock for "functional-753958"
	I1202 21:00:38.726264  301428 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 21:00:38.726268  301428 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 35.404µs
	I1202 21:00:38.726272  301428 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 21:00:38.726286  301428 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 21:00:38.726290  301428 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 266.249µs
	I1202 21:00:38.726300  301428 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 21:00:38.726307  301428 cache.go:87] Successfully saved all images to host disk.
	I1202 21:00:38.726276  301428 start.go:93] Provisioning new machine with config: &{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 21:00:38.726332  301428 start.go:125] createHost starting for "" (driver="docker")
	I1202 21:00:38.729746  301428 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1202 21:00:38.729991  301428 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:41313 to docker env.
	I1202 21:00:38.730027  301428 start.go:159] libmachine.API.Create for "functional-753958" (driver="docker")
	I1202 21:00:38.730054  301428 client.go:173] LocalClient.Create starting
	I1202 21:00:38.730113  301428 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem
	I1202 21:00:38.730143  301428 main.go:143] libmachine: Decoding PEM data...
	I1202 21:00:38.730161  301428 main.go:143] libmachine: Parsing certificate...
	I1202 21:00:38.730217  301428 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem
	I1202 21:00:38.730232  301428 main.go:143] libmachine: Decoding PEM data...
	I1202 21:00:38.730244  301428 main.go:143] libmachine: Parsing certificate...
	I1202 21:00:38.730593  301428 cli_runner.go:164] Run: docker network inspect functional-753958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 21:00:38.746644  301428 cli_runner.go:211] docker network inspect functional-753958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 21:00:38.746724  301428 network_create.go:284] running [docker network inspect functional-753958] to gather additional debugging logs...
	I1202 21:00:38.746740  301428 cli_runner.go:164] Run: docker network inspect functional-753958
	W1202 21:00:38.762057  301428 cli_runner.go:211] docker network inspect functional-753958 returned with exit code 1
	I1202 21:00:38.762078  301428 network_create.go:287] error running [docker network inspect functional-753958]: docker network inspect functional-753958: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-753958 not found
	I1202 21:00:38.762113  301428 network_create.go:289] output of [docker network inspect functional-753958]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-753958 not found
	
	** /stderr **
	I1202 21:00:38.762213  301428 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:00:38.778018  301428 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400198b550}
	I1202 21:00:38.778047  301428 network_create.go:124] attempt to create docker network functional-753958 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1202 21:00:38.778100  301428 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-753958 functional-753958
	I1202 21:00:38.832894  301428 network_create.go:108] docker network functional-753958 192.168.49.0/24 created
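The free subnet found above is materialized with a single `docker network create`. To replay this step outside the test harness, a small Go wrapper can re-issue the exact flags from the logged command (profile name and subnet are the ones from this run):

// Hypothetical helper that shells out to the same `docker network create`
// invocation logged above; all flag values are copied from the log.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.49.0/24",
		"--gateway=192.168.49.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=functional-753958",
		"functional-753958")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("network create failed: %v\n%s", err, out)
	}
}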
	I1202 21:00:38.832917  301428 kic.go:121] calculated static IP "192.168.49.2" for the "functional-753958" container
	I1202 21:00:38.833004  301428 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 21:00:38.848486  301428 cli_runner.go:164] Run: docker volume create functional-753958 --label name.minikube.sigs.k8s.io=functional-753958 --label created_by.minikube.sigs.k8s.io=true
	I1202 21:00:38.865344  301428 oci.go:103] Successfully created a docker volume functional-753958
	I1202 21:00:38.865419  301428 cli_runner.go:164] Run: docker run --rm --name functional-753958-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-753958 --entrypoint /usr/bin/test -v functional-753958:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 21:00:39.394730  301428 oci.go:107] Successfully prepared a docker volume functional-753958
	I1202 21:00:39.394796  301428 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1202 21:00:39.394933  301428 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1202 21:00:39.395032  301428 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 21:00:39.456690  301428 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-753958 --name functional-753958 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-753958 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-753958 --network functional-753958 --ip 192.168.49.2 --volume functional-753958:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 21:00:39.750661  301428 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Running}}
	I1202 21:00:39.773106  301428 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:00:39.798111  301428 cli_runner.go:164] Run: docker exec functional-753958 stat /var/lib/dpkg/alternatives/iptables
	I1202 21:00:39.844919  301428 oci.go:144] the created container "functional-753958" has a running status.
	I1202 21:00:39.844938  301428 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa...
	I1202 21:00:40.139684  301428 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 21:00:40.178510  301428 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:00:40.199530  301428 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 21:00:40.199542  301428 kic_runner.go:114] Args: [docker exec --privileged functional-753958 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 21:00:40.262646  301428 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:00:40.290146  301428 machine.go:94] provisionDockerMachine start ...
	I1202 21:00:40.290285  301428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:00:40.311397  301428 main.go:143] libmachine: Using SSH client type: native
	I1202 21:00:40.311714  301428 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:00:40.311720  301428 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:00:40.312365  301428 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41436->127.0.0.1:33108: read: connection reset by peer
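The handshake failure above is transient: sshd inside the freshly created container is not yet accepting connections, and the retry a few seconds later succeeds. A minimal sketch of such a wait loop, assuming the host port Docker mapped to the container's 22/tcp in this run (33108); a bare TCP connect is a weaker check than a full SSH handshake, but it illustrates the retry:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls until the mapped SSH port accepts a TCP connection or the
// timeout expires. Connection resets are expected while sshd is still starting.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("ssh not reachable at %s within %s", addr, timeout)
}

func main() {
	fmt.Println(waitForSSH("127.0.0.1:33108", 60*time.Second))
}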
	I1202 21:00:43.461414  301428 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-753958
	
	I1202 21:00:43.461428  301428 ubuntu.go:182] provisioning hostname "functional-753958"
	I1202 21:00:43.461548  301428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:00:43.479235  301428 main.go:143] libmachine: Using SSH client type: native
	I1202 21:00:43.479540  301428 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:00:43.479548  301428 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-753958 && echo "functional-753958" | sudo tee /etc/hostname
	I1202 21:00:43.638625  301428 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-753958
	
	I1202 21:00:43.638704  301428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:00:43.656435  301428 main.go:143] libmachine: Using SSH client type: native
	I1202 21:00:43.656742  301428 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:00:43.656755  301428 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753958/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:00:43.806257  301428 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 21:00:43.806274  301428 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 21:00:43.806292  301428 ubuntu.go:190] setting up certificates
	I1202 21:00:43.806300  301428 provision.go:84] configureAuth start
	I1202 21:00:43.806375  301428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
	I1202 21:00:43.825519  301428 provision.go:143] copyHostCerts
	I1202 21:00:43.825579  301428 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 21:00:43.825587  301428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 21:00:43.825705  301428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 21:00:43.825795  301428 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 21:00:43.825804  301428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 21:00:43.825831  301428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 21:00:43.825880  301428 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 21:00:43.825889  301428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 21:00:43.825912  301428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 21:00:43.825954  301428 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.functional-753958 san=[127.0.0.1 192.168.49.2 functional-753958 localhost minikube]
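The server certificate generated above is signed by the profile's CA and carries SANs for every address the endpoint may be reached by (loopback, the static container IP, and the hostnames). A rough Go equivalent of that step, assuming an RSA CA key in PKCS#1 PEM form; the file names are placeholders, not minikube's actual paths:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// mustPEM reads the first PEM block from a file (paths are placeholders).
func mustPEM(path string) *pem.Block {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in " + path)
	}
	return block
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEM("ca.pem").Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem").Bytes)
	if err != nil {
		panic(err)
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-753958"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the san=[...] list in the log line above.
		DNSNames:    []string{"functional-753958", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}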
	I1202 21:00:44.143402  301428 provision.go:177] copyRemoteCerts
	I1202 21:00:44.143482  301428 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:00:44.143522  301428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:00:44.161759  301428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:00:44.265307  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 21:00:44.282968  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 21:00:44.300706  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:00:44.318213  301428 provision.go:87] duration metric: took 511.877903ms to configureAuth
	I1202 21:00:44.318230  301428 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:00:44.318415  301428 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:00:44.318421  301428 machine.go:97] duration metric: took 4.0282656s to provisionDockerMachine
	I1202 21:00:44.318426  301428 client.go:176] duration metric: took 5.588368064s to LocalClient.Create
	I1202 21:00:44.318451  301428 start.go:167] duration metric: took 5.588424997s to libmachine.API.Create "functional-753958"
	I1202 21:00:44.318458  301428 start.go:293] postStartSetup for "functional-753958" (driver="docker")
	I1202 21:00:44.318467  301428 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:00:44.318520  301428 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:00:44.318564  301428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:00:44.338209  301428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:00:44.441688  301428 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:00:44.445112  301428 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:00:44.445129  301428 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:00:44.445144  301428 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 21:00:44.445196  301428 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 21:00:44.445284  301428 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 21:00:44.445357  301428 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts -> hosts in /etc/test/nested/copy/263241
	I1202 21:00:44.445404  301428 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/263241
	I1202 21:00:44.452966  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 21:00:44.470133  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts --> /etc/test/nested/copy/263241/hosts (40 bytes)
	I1202 21:00:44.487877  301428 start.go:296] duration metric: took 169.40589ms for postStartSetup
	I1202 21:00:44.488249  301428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
	I1202 21:00:44.510451  301428 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/config.json ...
	I1202 21:00:44.510747  301428 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:00:44.510793  301428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:00:44.532211  301428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:00:44.634907  301428 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:00:44.639662  301428 start.go:128] duration metric: took 5.913317726s to createHost
	I1202 21:00:44.639677  301428 start.go:83] releasing machines lock for "functional-753958", held for 5.913412049s
	I1202 21:00:44.639746  301428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
	I1202 21:00:44.660700  301428 out.go:179] * Found network options:
	I1202 21:00:44.663601  301428 out.go:179]   - HTTP_PROXY=localhost:41313
	W1202 21:00:44.666529  301428 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1202 21:00:44.669529  301428 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1202 21:00:44.672482  301428 ssh_runner.go:195] Run: cat /version.json
	I1202 21:00:44.672533  301428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:00:44.672555  301428 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:00:44.672604  301428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:00:44.695363  301428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:00:44.695936  301428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:00:44.797239  301428 ssh_runner.go:195] Run: systemctl --version
	I1202 21:00:44.894676  301428 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 21:00:44.899211  301428 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:00:44.899272  301428 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:00:44.926676  301428 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1202 21:00:44.926689  301428 start.go:496] detecting cgroup driver to use...
	I1202 21:00:44.926721  301428 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 21:00:44.926769  301428 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 21:00:44.941910  301428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 21:00:44.955039  301428 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:00:44.955093  301428 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:00:44.973112  301428 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:00:44.992757  301428 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:00:45.261542  301428 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:00:45.418625  301428 docker.go:234] disabling docker service ...
	I1202 21:00:45.418679  301428 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:00:45.441648  301428 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:00:45.455862  301428 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:00:45.580712  301428 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:00:45.703515  301428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 21:00:45.716886  301428 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:00:45.731348  301428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 21:00:45.740544  301428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 21:00:45.749590  301428 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 21:00:45.749670  301428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 21:00:45.759826  301428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 21:00:45.768600  301428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 21:00:45.776994  301428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 21:00:45.785549  301428 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:00:45.793578  301428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 21:00:45.802662  301428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 21:00:45.810974  301428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1202 21:00:45.819831  301428 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:00:45.827472  301428 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 21:00:45.834803  301428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:00:45.961142  301428 ssh_runner.go:195] Run: sudo systemctl restart containerd
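The sed pipeline above amounts to rewriting /etc/containerd/config.toml so containerd's cgroup driver matches the detected "cgroupfs" host driver, followed by a daemon restart. One of those edits, the SystemdCgroup flip, expressed as a standalone Go sketch with the same multiline-regex semantics as the logged sed command (needs root, like the original):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		panic(err)
	}
}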
	I1202 21:00:46.057083  301428 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 21:00:46.057145  301428 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 21:00:46.061374  301428 start.go:564] Will wait 60s for crictl version
	I1202 21:00:46.061431  301428 ssh_runner.go:195] Run: which crictl
	I1202 21:00:46.065495  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:00:46.090836  301428 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 21:00:46.090896  301428 ssh_runner.go:195] Run: containerd --version
	I1202 21:00:46.112045  301428 ssh_runner.go:195] Run: containerd --version
	I1202 21:00:46.136088  301428 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 21:00:46.139103  301428 cli_runner.go:164] Run: docker network inspect functional-753958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:00:46.156482  301428 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:00:46.160632  301428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 21:00:46.170869  301428 kubeadm.go:884] updating cluster {Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 21:00:46.170972  301428 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 21:00:46.171034  301428 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:00:46.195477  301428 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1202 21:00:46.195492  301428 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1202 21:00:46.195557  301428 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:00:46.195788  301428 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 21:00:46.195888  301428 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 21:00:46.195984  301428 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 21:00:46.196087  301428 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 21:00:46.196248  301428 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1202 21:00:46.196348  301428 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1202 21:00:46.196435  301428 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 21:00:46.198187  301428 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 21:00:46.198569  301428 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 21:00:46.198725  301428 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1202 21:00:46.198913  301428 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 21:00:46.199046  301428 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:00:46.199266  301428 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 21:00:46.199425  301428 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 21:00:46.199558  301428 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1202 21:00:46.552326  301428 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1202 21:00:46.552394  301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1202 21:00:46.555504  301428 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1202 21:00:46.555596  301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 21:00:46.567685  301428 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1202 21:00:46.567780  301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 21:00:46.574356  301428 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1202 21:00:46.574392  301428 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1202 21:00:46.574438  301428 ssh_runner.go:195] Run: which crictl
	I1202 21:00:46.587647  301428 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1202 21:00:46.587708  301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 21:00:46.601054  301428 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1202 21:00:46.601084  301428 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 21:00:46.601132  301428 ssh_runner.go:195] Run: which crictl
	I1202 21:00:46.601180  301428 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1202 21:00:46.601191  301428 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 21:00:46.601212  301428 ssh_runner.go:195] Run: which crictl
	I1202 21:00:46.601261  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 21:00:46.618715  301428 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1202 21:00:46.618774  301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1202 21:00:46.631791  301428 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1202 21:00:46.631849  301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 21:00:46.632282  301428 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1202 21:00:46.632309  301428 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 21:00:46.632362  301428 ssh_runner.go:195] Run: which crictl
	I1202 21:00:46.632901  301428 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1202 21:00:46.632940  301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1202 21:00:46.643461  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 21:00:46.643526  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 21:00:46.643575  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 21:00:46.683532  301428 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1202 21:00:46.683567  301428 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1202 21:00:46.683614  301428 ssh_runner.go:195] Run: which crictl
	I1202 21:00:46.683738  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 21:00:46.683775  301428 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1202 21:00:46.683788  301428 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 21:00:46.683807  301428 ssh_runner.go:195] Run: which crictl
	I1202 21:00:46.683844  301428 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1202 21:00:46.683854  301428 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 21:00:46.683872  301428 ssh_runner.go:195] Run: which crictl
	I1202 21:00:46.724974  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 21:00:46.732713  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 21:00:46.762742  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 21:00:46.779604  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 21:00:46.779638  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 21:00:46.779707  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 21:00:46.779711  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 21:00:46.779775  301428 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1202 21:00:46.779870  301428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1202 21:00:46.797714  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 21:00:46.873742  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 21:00:46.894215  301428 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1202 21:00:46.894243  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1202 21:00:46.894325  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 21:00:46.894378  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 21:00:46.894424  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 21:00:46.894469  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 21:00:46.895253  301428 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1202 21:00:46.895327  301428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 21:00:46.981211  301428 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1202 21:00:46.981289  301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1202 21:00:46.983853  301428 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1202 21:00:46.983943  301428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 21:00:46.984004  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 21:00:46.984065  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 21:00:46.984109  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 21:00:46.984141  301428 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1202 21:00:46.984183  301428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 21:00:46.984221  301428 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1202 21:00:46.984233  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1202 21:00:47.209535  301428 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1202 21:00:47.209567  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1202 21:00:47.209623  301428 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1202 21:00:47.209686  301428 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1202 21:00:47.209769  301428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 21:00:47.209829  301428 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1202 21:00:47.209876  301428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1202 21:00:47.209921  301428 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1202 21:00:47.209959  301428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1202 21:00:47.210003  301428 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1202 21:00:47.210011  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1202 21:00:47.239682  301428 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1202 21:00:47.239708  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1202 21:00:47.239764  301428 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1202 21:00:47.239772  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1202 21:00:47.239810  301428 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1202 21:00:47.239818  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
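The block above shows minikube's transfer pattern in full: for every cached artifact it first runs `stat -c "%s %y"` on the target path inside the node, and only falls back to scp when the stat exits with status 1 (file missing). A minimal local sketch of the same check-then-copy idea; the helper name copyIfMissing and the use of plain file I/O instead of minikube's ssh_runner are illustrative assumptions:

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// copyIfMissing mirrors the stat-then-transfer pattern from the log:
// if dst already exists the copy is skipped, otherwise src is streamed
// into dst. (Hypothetical helper; minikube does this over SSH.)
func copyIfMissing(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		fmt.Printf("existence check passed, skipping %s\n", dst)
		return nil
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Paths are illustrative only.
	err := copyIfMissing(
		os.ExpandEnv("$HOME/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0"),
		"/var/lib/minikube/images/etcd_3.6.5-0",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```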
	W1202 21:00:47.435553  301428 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1202 21:00:47.435686  301428 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1202 21:00:47.435743  301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:00:47.562322  301428 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1202 21:00:47.562369  301428 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:00:47.562427  301428 ssh_runner.go:195] Run: which crictl
	I1202 21:00:47.595335  301428 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 21:00:47.595390  301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 21:00:47.644679  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:00:48.887307  301428 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.291893114s)
	I1202 21:00:48.887316  301428 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.242617273s)
	I1202 21:00:48.887324  301428 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1202 21:00:48.887342  301428 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 21:00:48.887392  301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 21:00:48.887393  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:00:48.916290  301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:00:49.909389  301428 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.02195967s)
	I1202 21:00:49.909406  301428 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1202 21:00:49.909422  301428 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 21:00:49.909467  301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 21:00:49.909560  301428 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1202 21:00:49.909623  301428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1202 21:00:50.834954  301428 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1202 21:00:50.834981  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1202 21:00:50.835055  301428 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1202 21:00:50.835073  301428 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1202 21:00:50.835114  301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1202 21:00:51.829464  301428 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1202 21:00:51.829496  301428 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 21:00:51.829548  301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 21:00:52.776173  301428 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1202 21:00:52.776195  301428 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1202 21:00:52.776247  301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1202 21:00:54.138711  301428 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.362441436s)
	I1202 21:00:54.138738  301428 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1202 21:00:54.138761  301428 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1202 21:00:54.138815  301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1202 21:00:54.528051  301428 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1202 21:00:54.528076  301428 cache_images.go:125] Successfully loaded all cached images
	I1202 21:00:54.528080  301428 cache_images.go:94] duration metric: took 8.332576404s to LoadCachedImages
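Each "Loading image" step above shells out to containerd's CLI, `sudo ctr -n=k8s.io images import <tarball>`, which loads the cached tarball into the k8s.io namespace that the CRI plugin serves images from. A hedged sketch of issuing the same call from Go; exec'ing ctr directly on the host is an assumption, since minikube runs it through its SSH runner:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// importImage loads an image tarball into containerd's k8s.io namespace,
// timing the call the way the log's "Completed: ... (1.29s)" lines do.
func importImage(tarball string) error {
	start := time.Now()
	cmd := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("import %s: %w", tarball, err)
	}
	fmt.Printf("Completed: import %s (%s)\n", tarball, time.Since(start))
	return nil
}

func main() {
	// Path is illustrative; any OCI/Docker image tarball works.
	if err := importImage("/var/lib/minikube/images/etcd_3.6.5-0"); err != nil {
		os.Exit(1)
	}
}
```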
	I1202 21:00:54.528091  301428 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1202 21:00:54.528184  301428 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-753958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
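The unit text above relies on systemd's drop-in override convention: the bare `ExecStart=` line clears whatever ExecStart the base kubelet.service defines, so the following ExecStart fully replaces it rather than appending a second command. Minikube later copies this content to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 328-byte scp further down). A sketch of rendering such a drop-in; the helper is hypothetical and the flag values are copied from the log:

```go
package main

import (
	"fmt"
	"os"
)

// writeKubeletDropIn renders the drop-in shown in the log. The empty
// "ExecStart=" line resets the base unit's ExecStart before redefining it.
func writeKubeletDropIn(path, nodeIP, hostname, kubeletBin string) error {
	unit := fmt.Sprintf(`[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=%s --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

[Install]
`, kubeletBin, hostname, nodeIP)
	return os.WriteFile(path, []byte(unit), 0644)
}

func main() {
	err := writeKubeletDropIn(
		"/etc/systemd/system/kubelet.service.d/10-kubeadm.conf",
		"192.168.49.2", "functional-753958",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```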
	I1202 21:00:54.528249  301428 ssh_runner.go:195] Run: sudo crictl info
	I1202 21:00:54.554040  301428 cni.go:84] Creating CNI manager for ""
	I1202 21:00:54.554050  301428 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 21:00:54.554065  301428 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:00:54.554086  301428 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753958 NodeName:functional-753958 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:00:54.554193  301428 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-753958"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
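	The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are staged as /var/tmp/minikube/kubeadm.yaml.new and later handed to `kubeadm init --config`. When debugging a config like this by hand, kubeadm's --dry-run mode parses and renders it without touching the node; a small sketch, assuming kubeadm is on PATH (on the minikube node it lives under /var/lib/minikube/binaries/<version>):

```go
package main

import (
	"os"
	"os/exec"
)

// dryRun asks kubeadm to parse and render the generated config without
// starting anything, which surfaces schema errors in the YAML above.
func dryRun(configPath string) error {
	cmd := exec.Command("sudo", "kubeadm", "init",
		"--config", configPath,
		"--dry-run")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := dryRun("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		os.Exit(1)
	}
}
```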
	
	I1202 21:00:54.554262  301428 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 21:00:54.562816  301428 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1202 21:00:54.562870  301428 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 21:00:54.570767  301428 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1202 21:00:54.570857  301428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1202 21:00:54.570930  301428 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1202 21:00:54.570955  301428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:00:54.571028  301428 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1202 21:00:54.571071  301428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1202 21:00:54.587665  301428 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1202 21:00:54.587693  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1202 21:00:54.587749  301428 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1202 21:00:54.587758  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1202 21:00:54.587850  301428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1202 21:00:54.614261  301428 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1202 21:00:54.614296  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1202 21:00:55.432370  301428 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:00:55.440612  301428 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 21:00:55.452982  301428 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 21:00:55.465781  301428 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1202 21:00:55.479183  301428 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:00:55.482603  301428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
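The bash one-liner above keeps the control-plane hosts entry idempotent: `grep -v` strips any stale control-plane.minikube.internal line, echo appends the current IP, and the temp file is copied back over /etc/hosts. The same logic expressed directly in Go; doing it in-process instead of via bash is an assumption for illustration:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites hostsPath so that exactly one line maps name to ip,
// mirroring the grep-and-append one-liner from the log.
func upsertHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing mapping for this name, stale IPs included.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```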
	I1202 21:00:55.492672  301428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:00:55.613883  301428 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:00:55.629780  301428 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958 for IP: 192.168.49.2
	I1202 21:00:55.629791  301428 certs.go:195] generating shared ca certs ...
	I1202 21:00:55.629805  301428 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:00:55.629970  301428 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 21:00:55.630023  301428 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 21:00:55.630030  301428 certs.go:257] generating profile certs ...
	I1202 21:00:55.630085  301428 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key
	I1202 21:00:55.630095  301428 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt with IP's: []
	I1202 21:00:55.700640  301428 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt ...
	I1202 21:00:55.700656  301428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: {Name:mk95c43ebb16136159ec2ca7da9d4919573669c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:00:55.700856  301428 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key ...
	I1202 21:00:55.700864  301428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key: {Name:mk35ad2659bb9cd65ddd7dddae15269e3dd02152 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:00:55.700958  301428 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key.c4f6fd35
	I1202 21:00:55.700973  301428 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt.c4f6fd35 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1202 21:00:56.113158  301428 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt.c4f6fd35 ...
	I1202 21:00:56.113179  301428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt.c4f6fd35: {Name:mke2de1b4c5a9107874bf83633e5b7fb4d15f840 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:00:56.113398  301428 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key.c4f6fd35 ...
	I1202 21:00:56.113406  301428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key.c4f6fd35: {Name:mkd207616980fb220b881bbc5091a8d96c41a549 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:00:56.113491  301428 certs.go:382] copying /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt.c4f6fd35 -> /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt
	I1202 21:00:56.113569  301428 certs.go:386] copying /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key.c4f6fd35 -> /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key
	I1202 21:00:56.113620  301428 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key
	I1202 21:00:56.113632  301428 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.crt with IP's: []
	I1202 21:00:56.311583  301428 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.crt ...
	I1202 21:00:56.311598  301428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.crt: {Name:mkbdd0c576e3f5093bb327c407dad458e38a479a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:00:56.311791  301428 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key ...
	I1202 21:00:56.311799  301428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key: {Name:mk94b055c24f3389515b29eae7f9843af752bdf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:00:56.311986  301428 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 21:00:56.312027  301428 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 21:00:56.312035  301428 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:00:56.312059  301428 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 21:00:56.312081  301428 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:00:56.312106  301428 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 21:00:56.312149  301428 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 21:00:56.312720  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:00:56.330885  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:00:56.348969  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:00:56.366410  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 21:00:56.383949  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 21:00:56.401221  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 21:00:56.418985  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:00:56.436965  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 21:00:56.454317  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 21:00:56.471415  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:00:56.489446  301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 21:00:56.507201  301428 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 21:00:56.520253  301428 ssh_runner.go:195] Run: openssl version
	I1202 21:00:56.526341  301428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 21:00:56.534702  301428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 21:00:56.538480  301428 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 21:00:56.538551  301428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 21:00:56.580527  301428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 21:00:56.589528  301428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:00:56.599495  301428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:00:56.603317  301428 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:00:56.603380  301428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:00:56.644499  301428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 21:00:56.652999  301428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 21:00:56.661412  301428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 21:00:56.665349  301428 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 21:00:56.665408  301428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 21:00:56.706618  301428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
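The openssl/ln pairs above implement OpenSSL's hashed-directory lookup convention: `openssl x509 -hash -noout` prints the certificate's subject-name hash (b5213941 for minikubeCA here), and a symlink named `<hash>.0` in /etc/ssl/certs lets TLS verifiers locate the CA by that hash. A sketch reproducing the two steps; shelling out to openssl rather than computing the hash in-process is a deliberate simplification:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink creates the /etc/ssl/certs/<subject-hash>.0 symlink that the
// log builds with "openssl x509 -hash" plus "ln -fs".
func hashLink(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // equivalent of ln -f: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```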
	I1202 21:00:56.715533  301428 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:00:56.719338  301428 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 21:00:56.719380  301428 kubeadm.go:401] StartCluster: {Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:00:56.719447  301428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 21:00:56.719513  301428 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:00:56.744760  301428 cri.go:89] found id: ""
	I1202 21:00:56.744830  301428 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:00:56.752551  301428 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 21:00:56.760232  301428 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 21:00:56.760290  301428 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:00:56.768056  301428 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 21:00:56.768083  301428 kubeadm.go:158] found existing configuration files:
	
	I1202 21:00:56.768147  301428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:00:56.775823  301428 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 21:00:56.775878  301428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 21:00:56.783312  301428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:00:56.790957  301428 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 21:00:56.791025  301428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:00:56.798645  301428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:00:56.806311  301428 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 21:00:56.806367  301428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:00:56.813750  301428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:00:56.821398  301428 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 21:00:56.821456  301428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 21:00:56.828792  301428 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 21:00:56.865461  301428 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 21:00:56.865683  301428 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 21:00:56.938115  301428 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 21:00:56.938192  301428 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 21:00:56.938234  301428 kubeadm.go:319] OS: Linux
	I1202 21:00:56.938277  301428 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 21:00:56.938324  301428 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 21:00:56.938382  301428 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 21:00:56.938445  301428 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 21:00:56.938493  301428 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 21:00:56.938553  301428 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 21:00:56.938606  301428 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 21:00:56.938653  301428 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 21:00:56.938706  301428 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 21:00:57.013820  301428 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 21:00:57.013922  301428 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 21:00:57.014011  301428 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 21:00:57.020254  301428 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 21:00:57.029064  301428 out.go:252]   - Generating certificates and keys ...
	I1202 21:00:57.029166  301428 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 21:00:57.029244  301428 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 21:00:57.490567  301428 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 21:00:57.584542  301428 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 21:00:57.958852  301428 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 21:00:58.126107  301428 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 21:00:58.471456  301428 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 21:00:58.471747  301428 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-753958 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 21:00:58.816334  301428 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 21:00:58.816860  301428 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-753958 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 21:00:58.937480  301428 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 21:00:59.153716  301428 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 21:00:59.313929  301428 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 21:00:59.314156  301428 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 21:00:59.433265  301428 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 21:00:59.938394  301428 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 21:01:00.730816  301428 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 21:01:01.099097  301428 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 21:01:01.378487  301428 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 21:01:01.379048  301428 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 21:01:01.392592  301428 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 21:01:01.454653  301428 out.go:252]   - Booting up control plane ...
	I1202 21:01:01.454771  301428 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 21:01:01.454946  301428 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 21:01:01.455020  301428 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 21:01:01.455122  301428 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 21:01:01.455214  301428 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 21:01:01.455324  301428 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 21:01:01.455406  301428 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 21:01:01.455444  301428 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 21:01:01.613618  301428 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 21:01:01.613759  301428 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 21:05:01.613870  301428 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000311038s
	I1202 21:05:01.613902  301428 kubeadm.go:319] 
	I1202 21:05:01.613956  301428 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 21:05:01.613986  301428 kubeadm.go:319] 	- The kubelet is not running
	I1202 21:05:01.614089  301428 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 21:05:01.614095  301428 kubeadm.go:319] 
	I1202 21:05:01.614204  301428 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 21:05:01.614239  301428 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 21:05:01.614267  301428 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 21:05:01.614270  301428 kubeadm.go:319] 
	I1202 21:05:01.617971  301428 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 21:05:01.618435  301428 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 21:05:01.618542  301428 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 21:05:01.618769  301428 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 21:05:01.618774  301428 kubeadm.go:319] 
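The failing check is kubeadm polling the kubelet's local health endpoint on port 10248 for up to 4 minutes; "context deadline exceeded" means nothing ever answered on that port. The probe is trivial to reproduce while debugging, equivalent to `curl -sS http://127.0.0.1:10248/healthz`; a minimal sketch:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// probeKubelet hits the kubelet healthz endpoint the same way kubeadm's
// kubelet-check does; a healthy kubelet answers 200 "ok".
func probeKubelet() error {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://127.0.0.1:10248/healthz")
	if err != nil {
		return err // e.g. connection refused when the kubelet never started
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s: %s\n", resp.Status, body)
	return nil
}

func main() {
	if err := probeKubelet(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```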
	W1202 21:05:01.619015  301428 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-753958 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-753958 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000311038s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1202 21:05:01.619132  301428 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1202 21:05:01.619527  301428 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1202 21:05:02.033955  301428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:05:02.050302  301428 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 21:05:02.050374  301428 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:05:02.062576  301428 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 21:05:02.062591  301428 kubeadm.go:158] found existing configuration files:
	
	I1202 21:05:02.062649  301428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:05:02.071512  301428 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 21:05:02.071570  301428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 21:05:02.079207  301428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:05:02.089206  301428 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 21:05:02.089289  301428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:05:02.100648  301428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:05:02.109235  301428 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 21:05:02.109293  301428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:05:02.116935  301428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:05:02.126733  301428 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 21:05:02.126881  301428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 21:05:02.137914  301428 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 21:05:02.181758  301428 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 21:05:02.181873  301428 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 21:05:02.258861  301428 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 21:05:02.258951  301428 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 21:05:02.258993  301428 kubeadm.go:319] OS: Linux
	I1202 21:05:02.259037  301428 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 21:05:02.259085  301428 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 21:05:02.259131  301428 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 21:05:02.259178  301428 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 21:05:02.259225  301428 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 21:05:02.259273  301428 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 21:05:02.259320  301428 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 21:05:02.259367  301428 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 21:05:02.259413  301428 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 21:05:02.333801  301428 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 21:05:02.333916  301428 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 21:05:02.334023  301428 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 21:05:02.346082  301428 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 21:05:02.351335  301428 out.go:252]   - Generating certificates and keys ...
	I1202 21:05:02.351442  301428 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 21:05:02.351505  301428 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 21:05:02.351585  301428 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 21:05:02.351645  301428 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 21:05:02.351715  301428 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 21:05:02.351768  301428 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 21:05:02.351829  301428 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 21:05:02.351900  301428 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 21:05:02.351974  301428 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 21:05:02.352053  301428 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 21:05:02.352101  301428 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 21:05:02.352156  301428 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 21:05:02.442806  301428 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 21:05:02.899145  301428 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 21:05:03.106047  301428 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 21:05:03.498121  301428 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 21:05:03.690328  301428 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 21:05:03.691070  301428 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 21:05:03.693619  301428 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 21:05:03.696683  301428 out.go:252]   - Booting up control plane ...
	I1202 21:05:03.696778  301428 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 21:05:03.696851  301428 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 21:05:03.697156  301428 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 21:05:03.717699  301428 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 21:05:03.717975  301428 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 21:05:03.725544  301428 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 21:05:03.727090  301428 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 21:05:03.727136  301428 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 21:05:03.867497  301428 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 21:05:03.867603  301428 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 21:09:03.867207  301428 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000050144s
	I1202 21:09:03.867233  301428 kubeadm.go:319] 
	I1202 21:09:03.867302  301428 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 21:09:03.867334  301428 kubeadm.go:319] 	- The kubelet is not running
	I1202 21:09:03.867456  301428 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 21:09:03.867484  301428 kubeadm.go:319] 
	I1202 21:09:03.867622  301428 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 21:09:03.867665  301428 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 21:09:03.867706  301428 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 21:09:03.867710  301428 kubeadm.go:319] 
	I1202 21:09:03.872601  301428 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 21:09:03.873011  301428 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 21:09:03.873114  301428 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 21:09:03.873366  301428 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1202 21:09:03.873370  301428 kubeadm.go:319] 
	I1202 21:09:03.873434  301428 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1202 21:09:03.873486  301428 kubeadm.go:403] duration metric: took 8m7.154111325s to StartCluster
	I1202 21:09:03.873516  301428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:09:03.873575  301428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:09:03.898971  301428 cri.go:89] found id: ""
	I1202 21:09:03.898986  301428 logs.go:282] 0 containers: []
	W1202 21:09:03.898993  301428 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:09:03.898999  301428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:09:03.899062  301428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:09:03.923416  301428 cri.go:89] found id: ""
	I1202 21:09:03.923429  301428 logs.go:282] 0 containers: []
	W1202 21:09:03.923436  301428 logs.go:284] No container was found matching "etcd"
	I1202 21:09:03.923442  301428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:09:03.923504  301428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:09:03.950688  301428 cri.go:89] found id: ""
	I1202 21:09:03.950701  301428 logs.go:282] 0 containers: []
	W1202 21:09:03.950708  301428 logs.go:284] No container was found matching "coredns"
	I1202 21:09:03.950713  301428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:09:03.950769  301428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:09:03.974669  301428 cri.go:89] found id: ""
	I1202 21:09:03.974682  301428 logs.go:282] 0 containers: []
	W1202 21:09:03.974689  301428 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:09:03.974694  301428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:09:03.974752  301428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:09:04.001357  301428 cri.go:89] found id: ""
	I1202 21:09:04.001369  301428 logs.go:282] 0 containers: []
	W1202 21:09:04.001377  301428 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:09:04.001382  301428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:09:04.001440  301428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:09:04.032461  301428 cri.go:89] found id: ""
	I1202 21:09:04.032475  301428 logs.go:282] 0 containers: []
	W1202 21:09:04.032482  301428 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:09:04.032488  301428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:09:04.032548  301428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:09:04.057327  301428 cri.go:89] found id: ""
	I1202 21:09:04.057342  301428 logs.go:282] 0 containers: []
	W1202 21:09:04.057351  301428 logs.go:284] No container was found matching "kindnet"
	I1202 21:09:04.057363  301428 logs.go:123] Gathering logs for kubelet ...
	I1202 21:09:04.057374  301428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:09:04.113692  301428 logs.go:123] Gathering logs for dmesg ...
	I1202 21:09:04.113711  301428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:09:04.130702  301428 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:09:04.130719  301428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:09:04.194124  301428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:09:04.185822    5374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:09:04.186468    5374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:09:04.188330    5374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:09:04.188878    5374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:09:04.190600    5374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:09:04.185822    5374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:09:04.186468    5374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:09:04.188330    5374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:09:04.188878    5374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:09:04.190600    5374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:09:04.194135  301428 logs.go:123] Gathering logs for containerd ...
	I1202 21:09:04.194146  301428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:09:04.235613  301428 logs.go:123] Gathering logs for container status ...
	I1202 21:09:04.235631  301428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
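Note: the log-gathering steps above can be reproduced by hand against the node, using the same commands minikube runs, with the profile name from this test:

    minikube ssh -p functional-753958 "sudo journalctl -u kubelet -n 400"
    minikube ssh -p functional-753958 "sudo journalctl -u containerd -n 400"
    minikube ssh -p functional-753958 "sudo crictl ps -a"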
	W1202 21:09:04.265763  301428 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000050144s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
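Note: kubeadm's wait loop polls the kubelet health endpoint named in the error above; probing it directly from inside the node separates "process not running" from "running but unhealthy". A sketch against the same profile (connection refused, as seen here, means no kubelet process is listening at all):

    minikube ssh -p functional-753958 "curl -sS http://127.0.0.1:10248/healthz"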
	W1202 21:09:04.265805  301428 out.go:285] * 
	W1202 21:09:04.265903  301428 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1202 21:09:04.265959  301428 out.go:285] * 
	W1202 21:09:04.268308  301428 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:09:04.273804  301428 out.go:203] 
	W1202 21:09:04.276846  301428 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1202 21:09:04.276896  301428 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 21:09:04.276915  301428 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
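Note: for reference, the suggested flag would be passed like this (a sketch only; since the kubelet journal below shows the validation failing on cgroup v1 itself, the cgroup-driver setting alone may not be sufficient on this host):

    out/minikube-linux-arm64 start -p functional-753958 \
      --driver=docker --container-runtime=containerd \
      --extra-config=kubelet.cgroup-driver=systemd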
	I1202 21:09:04.280035  301428 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 21:00:48 functional-753958 containerd[764]: time="2025-12-02T21:00:48.897231078Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:00:49 functional-753958 containerd[764]: time="2025-12-02T21:00:49.901335518Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 02 21:00:49 functional-753958 containerd[764]: time="2025-12-02T21:00:49.903567046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 02 21:00:49 functional-753958 containerd[764]: time="2025-12-02T21:00:49.916377354Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:00:49 functional-753958 containerd[764]: time="2025-12-02T21:00:49.919108962Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:00:50 functional-753958 containerd[764]: time="2025-12-02T21:00:50.824755953Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 02 21:00:50 functional-753958 containerd[764]: time="2025-12-02T21:00:50.826934747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 02 21:00:50 functional-753958 containerd[764]: time="2025-12-02T21:00:50.834959114Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:00:50 functional-753958 containerd[764]: time="2025-12-02T21:00:50.836081183Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:00:51 functional-753958 containerd[764]: time="2025-12-02T21:00:51.819127594Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 02 21:00:51 functional-753958 containerd[764]: time="2025-12-02T21:00:51.821512398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 02 21:00:51 functional-753958 containerd[764]: time="2025-12-02T21:00:51.829079013Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:00:51 functional-753958 containerd[764]: time="2025-12-02T21:00:51.830189890Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:00:52 functional-753958 containerd[764]: time="2025-12-02T21:00:52.765584481Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 02 21:00:52 functional-753958 containerd[764]: time="2025-12-02T21:00:52.768055985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 02 21:00:52 functional-753958 containerd[764]: time="2025-12-02T21:00:52.775070804Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:00:52 functional-753958 containerd[764]: time="2025-12-02T21:00:52.775767392Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:00:54 functional-753958 containerd[764]: time="2025-12-02T21:00:54.130296524Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 02 21:00:54 functional-753958 containerd[764]: time="2025-12-02T21:00:54.133244134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 02 21:00:54 functional-753958 containerd[764]: time="2025-12-02T21:00:54.143819863Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:00:54 functional-753958 containerd[764]: time="2025-12-02T21:00:54.144469166Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:00:54 functional-753958 containerd[764]: time="2025-12-02T21:00:54.517690981Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 02 21:00:54 functional-753958 containerd[764]: time="2025-12-02T21:00:54.520005025Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 02 21:00:54 functional-753958 containerd[764]: time="2025-12-02T21:00:54.526740402Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:00:54 functional-753958 containerd[764]: time="2025-12-02T21:00:54.527072085Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:09:05.310941    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:09:05.311843    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:09:05.312869    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:09:05.314693    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:09:05.315282    5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 21:09:05 up  2:51,  0 user,  load average: 0.43, 0.58, 1.19
	Linux functional-753958 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:09:02 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:09:02 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 02 21:09:02 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:09:02 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:09:02 functional-753958 kubelet[5299]: E1202 21:09:02.842421    5299 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:09:02 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:09:02 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:09:03 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 02 21:09:03 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:09:03 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:09:03 functional-753958 kubelet[5305]: E1202 21:09:03.585220    5305 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:09:03 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:09:03 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:09:04 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 02 21:09:04 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:09:04 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:09:04 functional-753958 kubelet[5393]: E1202 21:09:04.377277    5393 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:09:04 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:09:04 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:09:05 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 02 21:09:05 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:09:05 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:09:05 functional-753958 kubelet[5442]: E1202 21:09:05.102549    5442 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:09:05 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:09:05 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
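Note: the journal above shows the kubelet crash-looping on its cgroup validation (restart counter 318 through 321). Whether the node really presents cgroup v1 can be confirmed with a one-liner; 'tmpfs' indicates the legacy v1 hierarchy, 'cgroup2fs' the unified v2 hierarchy:

    minikube ssh -p functional-753958 "stat -fc %T /sys/fs/cgroup"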
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958: exit status 6 (353.403787ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 21:09:05.799610  307652 status.go:458] kubeconfig endpoint: get endpoint: "functional-753958" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-753958" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (507.59s)
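Note: the stale-kubeconfig WARNING in the status output above has a direct remedy; the hint it prints would be run as (sketch, same binary and profile as the test):

    out/minikube-linux-arm64 update-context -p functional-753958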

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1202 21:09:05.816864  263241 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-753958 --alsologtostderr -v=8
E1202 21:09:44.122778  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:10:11.831712  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:12:36.514069  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:13:59.583008  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:14:44.122920  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-753958 --alsologtostderr -v=8: exit status 80 (6m5.501983693s)

                                                
                                                
-- stdout --
	* [functional-753958] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-753958" primary control-plane node in "functional-753958" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 21:09:05.869127  307731 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:09:05.869342  307731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:09:05.869372  307731 out.go:374] Setting ErrFile to fd 2...
	I1202 21:09:05.869392  307731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:09:05.870120  307731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:09:05.870642  307731 out.go:368] Setting JSON to false
	I1202 21:09:05.871532  307731 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10284,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 21:09:05.871698  307731 start.go:143] virtualization:  
	I1202 21:09:05.875240  307731 out.go:179] * [functional-753958] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:09:05.878196  307731 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:09:05.878269  307731 notify.go:221] Checking for updates...
	I1202 21:09:05.884072  307731 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:09:05.886942  307731 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:05.889899  307731 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 21:09:05.892813  307731 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:09:05.895771  307731 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:09:05.899217  307731 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:09:05.899365  307731 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:09:05.932799  307731 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:09:05.932919  307731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:09:05.993966  307731 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:09:05.984741651 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:09:05.994072  307731 docker.go:319] overlay module found
	I1202 21:09:05.997248  307731 out.go:179] * Using the docker driver based on existing profile
	I1202 21:09:06.000038  307731 start.go:309] selected driver: docker
	I1202 21:09:06.000060  307731 start.go:927] validating driver "docker" against &{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:09:06.000154  307731 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:09:06.000264  307731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:09:06.066709  307731 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:09:06.057768194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:09:06.067144  307731 cni.go:84] Creating CNI manager for ""
	I1202 21:09:06.067209  307731 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 21:09:06.067263  307731 start.go:353] cluster config:
	{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:09:06.070421  307731 out.go:179] * Starting "functional-753958" primary control-plane node in "functional-753958" cluster
	I1202 21:09:06.073261  307731 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 21:09:06.078117  307731 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:09:06.080953  307731 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 21:09:06.081041  307731 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:09:06.101516  307731 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:09:06.101541  307731 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 21:09:06.138751  307731 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1202 21:09:06.314468  307731 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
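Note: both 404s are expected for a beta Kubernetes version with no published preload tarball, and minikube falls back to the per-image cache exercised below. The missing artifact can be confirmed directly (URL verbatim from the warning above):

    curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 | head -n 1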
	I1202 21:09:06.314628  307731 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/config.json ...
	I1202 21:09:06.314753  307731 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.314852  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 21:09:06.314868  307731 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 127.02µs
	I1202 21:09:06.314884  307731 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 21:09:06.314900  307731 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.314935  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 21:09:06.314945  307731 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 46.735µs
	I1202 21:09:06.314952  307731 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 21:09:06.314968  307731 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315000  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 21:09:06.315009  307731 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 42.764µs
	I1202 21:09:06.315016  307731 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 21:09:06.315030  307731 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315059  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 21:09:06.315069  307731 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 39.875µs
	I1202 21:09:06.315075  307731 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 21:09:06.315089  307731 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315119  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 21:09:06.315127  307731 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 38.629µs
	I1202 21:09:06.315144  307731 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 21:09:06.315143  307731 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:09:06.315177  307731 start.go:360] acquireMachinesLock for functional-753958: {Name:mk3203202a2efc5b27c2a0a16d932dc1b1f07522 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315202  307731 start.go:364] duration metric: took 13.3µs to acquireMachinesLock for "functional-753958"
	I1202 21:09:06.315219  307731 start.go:96] Skipping create...Using existing machine configuration
	I1202 21:09:06.315230  307731 fix.go:54] fixHost starting: 
	I1202 21:09:06.315183  307731 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315307  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 21:09:06.315332  307731 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 153.571µs
	I1202 21:09:06.315357  307731 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 21:09:06.315387  307731 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315443  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 21:09:06.315465  307731 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 80.424µs
	I1202 21:09:06.315488  307731 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:09:06.315527  307731 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315588  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 21:09:06.315619  307731 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 95.488µs
	I1202 21:09:06.315640  307731 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 21:09:06.315489  307731 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 21:09:06.315801  307731 cache.go:87] Successfully saved all images to host disk.
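	Each "cache image ... exists" hit above corresponds to an image tarball already present under the test host's cache directory. A quick way to see what the fallback cache contains (listing command is illustrative; the path is taken verbatim from the log, and the expected entries are inferred from the cache hits above):

	    ls /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/
	    # roughly: kube-apiserver_v1.35.0-beta.0  kube-controller-manager_v1.35.0-beta.0
	    #          kube-proxy_v1.35.0-beta.0      kube-scheduler_v1.35.0-beta.0
	    #          etcd_3.6.5-0  pause_3.10.1  coredns/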
	I1202 21:09:06.333736  307731 fix.go:112] recreateIfNeeded on functional-753958: state=Running err=<nil>
	W1202 21:09:06.333771  307731 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 21:09:06.337175  307731 out.go:252] * Updating the running docker "functional-753958" container ...
	I1202 21:09:06.337206  307731 machine.go:94] provisionDockerMachine start ...
	I1202 21:09:06.337301  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:06.354474  307731 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:06.354810  307731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:09:06.354830  307731 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:09:06.501197  307731 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-753958
	
	I1202 21:09:06.501220  307731 ubuntu.go:182] provisioning hostname "functional-753958"
	I1202 21:09:06.501288  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:06.519375  307731 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:06.519710  307731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:09:06.519727  307731 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-753958 && echo "functional-753958" | sudo tee /etc/hostname
	I1202 21:09:06.687724  307731 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-753958
	
	I1202 21:09:06.687814  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:06.707419  307731 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:06.707758  307731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:09:06.707780  307731 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753958/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:09:06.858340  307731 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 21:09:06.858365  307731 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 21:09:06.858387  307731 ubuntu.go:190] setting up certificates
	I1202 21:09:06.858407  307731 provision.go:84] configureAuth start
	I1202 21:09:06.858472  307731 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
	I1202 21:09:06.877925  307731 provision.go:143] copyHostCerts
	I1202 21:09:06.877980  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 21:09:06.878020  307731 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 21:09:06.878036  307731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 21:09:06.878121  307731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 21:09:06.878219  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 21:09:06.878244  307731 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 21:09:06.878253  307731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 21:09:06.878283  307731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 21:09:06.878341  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 21:09:06.878361  307731 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 21:09:06.878366  307731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 21:09:06.878392  307731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 21:09:06.878454  307731 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.functional-753958 san=[127.0.0.1 192.168.49.2 functional-753958 localhost minikube]
	I1202 21:09:07.212788  307731 provision.go:177] copyRemoteCerts
	I1202 21:09:07.212871  307731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:09:07.212914  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.229990  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.334622  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 21:09:07.334690  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 21:09:07.358156  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 21:09:07.358212  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:09:07.374829  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 21:09:07.374936  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 21:09:07.391856  307731 provision.go:87] duration metric: took 533.420534ms to configureAuth
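	configureAuth regenerated the machine server certificate with the SANs listed earlier (127.0.0.1, 192.168.49.2, functional-753958, localhost, minikube) and copied it to /etc/docker/server.pem. Assuming openssl is present inside the node, the SANs can be confirmed with:

	    # Run inside the container, e.g. via `minikube ssh -p functional-753958`
	    sudo openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName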
	I1202 21:09:07.391883  307731 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:09:07.392075  307731 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:09:07.392088  307731 machine.go:97] duration metric: took 1.054874904s to provisionDockerMachine
	I1202 21:09:07.392096  307731 start.go:293] postStartSetup for "functional-753958" (driver="docker")
	I1202 21:09:07.392108  307731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:09:07.392158  307731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:09:07.392201  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.409892  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.513929  307731 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:09:07.517313  307731 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1202 21:09:07.517377  307731 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1202 21:09:07.517399  307731 command_runner.go:130] > VERSION_ID="12"
	I1202 21:09:07.517411  307731 command_runner.go:130] > VERSION="12 (bookworm)"
	I1202 21:09:07.517423  307731 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1202 21:09:07.517428  307731 command_runner.go:130] > ID=debian
	I1202 21:09:07.517432  307731 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1202 21:09:07.517437  307731 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1202 21:09:07.517460  307731 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1202 21:09:07.517505  307731 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:09:07.517555  307731 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:09:07.517574  307731 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 21:09:07.517638  307731 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 21:09:07.517741  307731 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 21:09:07.517755  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> /etc/ssl/certs/2632412.pem
	I1202 21:09:07.517830  307731 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts -> hosts in /etc/test/nested/copy/263241
	I1202 21:09:07.517839  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts -> /etc/test/nested/copy/263241/hosts
	I1202 21:09:07.517882  307731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/263241
	I1202 21:09:07.525639  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 21:09:07.543648  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts --> /etc/test/nested/copy/263241/hosts (40 bytes)
	I1202 21:09:07.560944  307731 start.go:296] duration metric: took 168.831988ms for postStartSetup
	I1202 21:09:07.561067  307731 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:09:07.561116  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.579622  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.682695  307731 command_runner.go:130] > 12%
	I1202 21:09:07.682778  307731 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:09:07.687210  307731 command_runner.go:130] > 172G
	I1202 21:09:07.687707  307731 fix.go:56] duration metric: took 1.372471826s for fixHost
	I1202 21:09:07.687729  307731 start.go:83] releasing machines lock for "functional-753958", held for 1.372515567s
	I1202 21:09:07.687799  307731 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
	I1202 21:09:07.704780  307731 ssh_runner.go:195] Run: cat /version.json
	I1202 21:09:07.704833  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.704860  307731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:09:07.704931  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.726613  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.737148  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.829144  307731 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1202 21:09:07.829307  307731 ssh_runner.go:195] Run: systemctl --version
	I1202 21:09:07.919742  307731 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1202 21:09:07.919788  307731 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1202 21:09:07.919811  307731 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1202 21:09:07.919883  307731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 21:09:07.924332  307731 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1202 21:09:07.924495  307731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:09:07.924590  307731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:09:07.932451  307731 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 21:09:07.932475  307731 start.go:496] detecting cgroup driver to use...
	I1202 21:09:07.932505  307731 detect.go:187] detected "cgroupfs" cgroup driver on host os
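	The "cgroupfs" detection feeds the containerd configuration below (SystemdCgroup = false). For the docker driver, the host daemon's cgroup driver is the relevant signal; a minimal equivalent check, not minikube's exact code path, is:

	    docker info --format '{{.CgroupDriver}}'
	    # -> cgroupfs, matching the detection above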
	I1202 21:09:07.932553  307731 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 21:09:07.947902  307731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 21:09:07.964330  307731 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:09:07.964400  307731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:09:07.980760  307731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:09:07.995134  307731 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:09:08.122567  307731 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:09:08.232585  307731 docker.go:234] disabling docker service ...
	I1202 21:09:08.232660  307731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:09:08.247806  307731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:09:08.260075  307731 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:09:08.380227  307731 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:09:08.498586  307731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 21:09:08.511975  307731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:09:08.525630  307731 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1202 21:09:08.525792  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 21:09:08.534331  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 21:09:08.543412  307731 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 21:09:08.543534  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 21:09:08.552561  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 21:09:08.561268  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 21:09:08.570127  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 21:09:08.578716  307731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:09:08.586804  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 21:09:08.595543  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 21:09:08.604412  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
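	The sed sequence above pins the sandbox image, forces SystemdCgroup = false to match the detected cgroupfs driver, normalizes legacy runc runtime names to io.containerd.runc.v2, and re-enables unprivileged ports. A spot check of the rewritten file (expected values derived from those commands, not from the log itself):

	    grep -E 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
	    #   SystemdCgroup = false
	    #   conf_dir = "/etc/cni/net.d"
	    #   enable_unprivileged_ports = true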
	I1202 21:09:08.613462  307731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:09:08.620008  307731 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1202 21:09:08.621008  307731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 21:09:08.628262  307731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:09:08.744391  307731 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1202 21:09:08.864675  307731 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 21:09:08.864794  307731 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 21:09:08.868351  307731 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1202 21:09:08.868411  307731 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1202 21:09:08.868454  307731 command_runner.go:130] > Device: 0,72	Inode: 1612        Links: 1
	I1202 21:09:08.868480  307731 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 21:09:08.868521  307731 command_runner.go:130] > Access: 2025-12-02 21:09:08.840863455 +0000
	I1202 21:09:08.868544  307731 command_runner.go:130] > Modify: 2025-12-02 21:09:08.840863455 +0000
	I1202 21:09:08.868569  307731 command_runner.go:130] > Change: 2025-12-02 21:09:08.840863455 +0000
	I1202 21:09:08.868599  307731 command_runner.go:130] >  Birth: -
	I1202 21:09:08.868892  307731 start.go:564] Will wait 60s for crictl version
	I1202 21:09:08.868989  307731 ssh_runner.go:195] Run: which crictl
	I1202 21:09:08.872054  307731 command_runner.go:130] > /usr/local/bin/crictl
	I1202 21:09:08.872553  307731 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:09:08.897996  307731 command_runner.go:130] > Version:  0.1.0
	I1202 21:09:08.898089  307731 command_runner.go:130] > RuntimeName:  containerd
	I1202 21:09:08.898120  307731 command_runner.go:130] > RuntimeVersion:  v2.1.5
	I1202 21:09:08.898152  307731 command_runner.go:130] > RuntimeApiVersion:  v1
	I1202 21:09:08.900685  307731 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 21:09:08.900802  307731 ssh_runner.go:195] Run: containerd --version
	I1202 21:09:08.918917  307731 command_runner.go:130] > containerd containerd.io v2.1.5 fcd43222d6b07379a4be9786bda52438f0dd16a1
	I1202 21:09:08.920319  307731 ssh_runner.go:195] Run: containerd --version
	I1202 21:09:08.938561  307731 command_runner.go:130] > containerd containerd.io v2.1.5 fcd43222d6b07379a4be9786bda52438f0dd16a1
	I1202 21:09:08.945896  307731 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 21:09:08.948895  307731 cli_runner.go:164] Run: docker network inspect functional-753958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:09:08.964797  307731 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:09:08.968415  307731 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1202 21:09:08.968697  307731 kubeadm.go:884] updating cluster {Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 21:09:08.968812  307731 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 21:09:08.968871  307731 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:09:08.989960  307731 command_runner.go:130] > {
	I1202 21:09:08.989978  307731 command_runner.go:130] >   "images":  [
	I1202 21:09:08.989982  307731 command_runner.go:130] >     {
	I1202 21:09:08.989991  307731 command_runner.go:130] >       "id":  "sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1202 21:09:08.989996  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990002  307731 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1202 21:09:08.990005  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990009  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990013  307731 command_runner.go:130] >       "size":  "8032639",
	I1202 21:09:08.990018  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990022  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990025  307731 command_runner.go:130] >     },
	I1202 21:09:08.990027  307731 command_runner.go:130] >     {
	I1202 21:09:08.990039  307731 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1202 21:09:08.990044  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990049  307731 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1202 21:09:08.990052  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990057  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990066  307731 command_runner.go:130] >       "size":  "21166088",
	I1202 21:09:08.990071  307731 command_runner.go:130] >       "username":  "nonroot",
	I1202 21:09:08.990075  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990078  307731 command_runner.go:130] >     },
	I1202 21:09:08.990085  307731 command_runner.go:130] >     {
	I1202 21:09:08.990092  307731 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1202 21:09:08.990096  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990101  307731 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1202 21:09:08.990104  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990108  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990112  307731 command_runner.go:130] >       "size":  "21134420",
	I1202 21:09:08.990116  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990120  307731 command_runner.go:130] >         "value":  "0"
	I1202 21:09:08.990123  307731 command_runner.go:130] >       },
	I1202 21:09:08.990126  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990130  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990133  307731 command_runner.go:130] >     },
	I1202 21:09:08.990136  307731 command_runner.go:130] >     {
	I1202 21:09:08.990143  307731 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1202 21:09:08.990147  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990156  307731 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1202 21:09:08.990159  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990163  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990167  307731 command_runner.go:130] >       "size":  "24676285",
	I1202 21:09:08.990170  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990175  307731 command_runner.go:130] >         "value":  "0"
	I1202 21:09:08.990178  307731 command_runner.go:130] >       },
	I1202 21:09:08.990182  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990189  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990192  307731 command_runner.go:130] >     },
	I1202 21:09:08.990195  307731 command_runner.go:130] >     {
	I1202 21:09:08.990202  307731 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1202 21:09:08.990206  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990213  307731 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1202 21:09:08.990216  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990220  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990224  307731 command_runner.go:130] >       "size":  "20658969",
	I1202 21:09:08.990227  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990231  307731 command_runner.go:130] >         "value":  "0"
	I1202 21:09:08.990233  307731 command_runner.go:130] >       },
	I1202 21:09:08.990237  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990241  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990244  307731 command_runner.go:130] >     },
	I1202 21:09:08.990246  307731 command_runner.go:130] >     {
	I1202 21:09:08.990253  307731 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1202 21:09:08.990257  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990262  307731 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1202 21:09:08.990265  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990269  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990273  307731 command_runner.go:130] >       "size":  "22428165",
	I1202 21:09:08.990277  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990280  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990283  307731 command_runner.go:130] >     },
	I1202 21:09:08.990287  307731 command_runner.go:130] >     {
	I1202 21:09:08.990293  307731 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1202 21:09:08.990297  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990302  307731 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1202 21:09:08.990305  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990314  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990318  307731 command_runner.go:130] >       "size":  "15389290",
	I1202 21:09:08.990322  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990329  307731 command_runner.go:130] >         "value":  "0"
	I1202 21:09:08.990332  307731 command_runner.go:130] >       },
	I1202 21:09:08.990336  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990339  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990342  307731 command_runner.go:130] >     },
	I1202 21:09:08.990345  307731 command_runner.go:130] >     {
	I1202 21:09:08.990352  307731 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1202 21:09:08.990356  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990361  307731 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1202 21:09:08.990364  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990371  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990375  307731 command_runner.go:130] >       "size":  "265458",
	I1202 21:09:08.990379  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990383  307731 command_runner.go:130] >         "value":  "65535"
	I1202 21:09:08.990386  307731 command_runner.go:130] >       },
	I1202 21:09:08.990389  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990393  307731 command_runner.go:130] >       "pinned":  true
	I1202 21:09:08.990396  307731 command_runner.go:130] >     }
	I1202 21:09:08.990402  307731 command_runner.go:130] >   ]
	I1202 21:09:08.990404  307731 command_runner.go:130] > }
	I1202 21:09:08.992021  307731 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 21:09:08.992044  307731 cache_images.go:86] Images are preloaded, skipping loading
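	The crictl dump above is what the preload decision keys on: every image required for v1.35.0-beta.0 already carries a repoTag in the runtime. A compact way to reproduce the comparison by hand (assumes jq is installed in the node):

	    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort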
	I1202 21:09:08.992052  307731 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1202 21:09:08.992155  307731 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-753958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
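	The ExecStart override above is written to a systemd drop-in (the 328-byte 10-kubeadm.conf scp'd a few lines below). Once installed, the effective unit, base file plus drop-in, can be reviewed with:

	    systemctl cat kubelet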
	I1202 21:09:08.992222  307731 ssh_runner.go:195] Run: sudo crictl info
	I1202 21:09:09.027109  307731 command_runner.go:130] > {
	I1202 21:09:09.027127  307731 command_runner.go:130] >   "cniconfig": {
	I1202 21:09:09.027132  307731 command_runner.go:130] >     "Networks": [
	I1202 21:09:09.027136  307731 command_runner.go:130] >       {
	I1202 21:09:09.027142  307731 command_runner.go:130] >         "Config": {
	I1202 21:09:09.027146  307731 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1202 21:09:09.027151  307731 command_runner.go:130] >           "Name": "cni-loopback",
	I1202 21:09:09.027155  307731 command_runner.go:130] >           "Plugins": [
	I1202 21:09:09.027164  307731 command_runner.go:130] >             {
	I1202 21:09:09.027168  307731 command_runner.go:130] >               "Network": {
	I1202 21:09:09.027172  307731 command_runner.go:130] >                 "ipam": {},
	I1202 21:09:09.027178  307731 command_runner.go:130] >                 "type": "loopback"
	I1202 21:09:09.027181  307731 command_runner.go:130] >               },
	I1202 21:09:09.027186  307731 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1202 21:09:09.027189  307731 command_runner.go:130] >             }
	I1202 21:09:09.027193  307731 command_runner.go:130] >           ],
	I1202 21:09:09.027203  307731 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1202 21:09:09.027207  307731 command_runner.go:130] >         },
	I1202 21:09:09.027212  307731 command_runner.go:130] >         "IFName": "lo"
	I1202 21:09:09.027215  307731 command_runner.go:130] >       }
	I1202 21:09:09.027218  307731 command_runner.go:130] >     ],
	I1202 21:09:09.027223  307731 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1202 21:09:09.027227  307731 command_runner.go:130] >     "PluginDirs": [
	I1202 21:09:09.027230  307731 command_runner.go:130] >       "/opt/cni/bin"
	I1202 21:09:09.027234  307731 command_runner.go:130] >     ],
	I1202 21:09:09.027238  307731 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1202 21:09:09.027242  307731 command_runner.go:130] >     "Prefix": "eth"
	I1202 21:09:09.027245  307731 command_runner.go:130] >   },
	I1202 21:09:09.027248  307731 command_runner.go:130] >   "config": {
	I1202 21:09:09.027252  307731 command_runner.go:130] >     "cdiSpecDirs": [
	I1202 21:09:09.027256  307731 command_runner.go:130] >       "/etc/cdi",
	I1202 21:09:09.027259  307731 command_runner.go:130] >       "/var/run/cdi"
	I1202 21:09:09.027263  307731 command_runner.go:130] >     ],
	I1202 21:09:09.027266  307731 command_runner.go:130] >     "cni": {
	I1202 21:09:09.027269  307731 command_runner.go:130] >       "binDir": "",
	I1202 21:09:09.027273  307731 command_runner.go:130] >       "binDirs": [
	I1202 21:09:09.027277  307731 command_runner.go:130] >         "/opt/cni/bin"
	I1202 21:09:09.027280  307731 command_runner.go:130] >       ],
	I1202 21:09:09.027285  307731 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1202 21:09:09.027289  307731 command_runner.go:130] >       "confTemplate": "",
	I1202 21:09:09.027292  307731 command_runner.go:130] >       "ipPref": "",
	I1202 21:09:09.027300  307731 command_runner.go:130] >       "maxConfNum": 1,
	I1202 21:09:09.027304  307731 command_runner.go:130] >       "setupSerially": false,
	I1202 21:09:09.027309  307731 command_runner.go:130] >       "useInternalLoopback": false
	I1202 21:09:09.027312  307731 command_runner.go:130] >     },
	I1202 21:09:09.027321  307731 command_runner.go:130] >     "containerd": {
	I1202 21:09:09.027325  307731 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1202 21:09:09.027330  307731 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1202 21:09:09.027335  307731 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1202 21:09:09.027339  307731 command_runner.go:130] >       "runtimes": {
	I1202 21:09:09.027342  307731 command_runner.go:130] >         "runc": {
	I1202 21:09:09.027347  307731 command_runner.go:130] >           "ContainerAnnotations": null,
	I1202 21:09:09.027351  307731 command_runner.go:130] >           "PodAnnotations": null,
	I1202 21:09:09.027357  307731 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1202 21:09:09.027361  307731 command_runner.go:130] >           "cgroupWritable": false,
	I1202 21:09:09.027365  307731 command_runner.go:130] >           "cniConfDir": "",
	I1202 21:09:09.027370  307731 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1202 21:09:09.027374  307731 command_runner.go:130] >           "io_type": "",
	I1202 21:09:09.027378  307731 command_runner.go:130] >           "options": {
	I1202 21:09:09.027382  307731 command_runner.go:130] >             "BinaryName": "",
	I1202 21:09:09.027386  307731 command_runner.go:130] >             "CriuImagePath": "",
	I1202 21:09:09.027390  307731 command_runner.go:130] >             "CriuWorkPath": "",
	I1202 21:09:09.027394  307731 command_runner.go:130] >             "IoGid": 0,
	I1202 21:09:09.027398  307731 command_runner.go:130] >             "IoUid": 0,
	I1202 21:09:09.027402  307731 command_runner.go:130] >             "NoNewKeyring": false,
	I1202 21:09:09.027407  307731 command_runner.go:130] >             "Root": "",
	I1202 21:09:09.027411  307731 command_runner.go:130] >             "ShimCgroup": "",
	I1202 21:09:09.027415  307731 command_runner.go:130] >             "SystemdCgroup": false
	I1202 21:09:09.027418  307731 command_runner.go:130] >           },
	I1202 21:09:09.027424  307731 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1202 21:09:09.027430  307731 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1202 21:09:09.027434  307731 command_runner.go:130] >           "runtimePath": "",
	I1202 21:09:09.027440  307731 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1202 21:09:09.027444  307731 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1202 21:09:09.027451  307731 command_runner.go:130] >           "snapshotter": ""
	I1202 21:09:09.027455  307731 command_runner.go:130] >         }
	I1202 21:09:09.027458  307731 command_runner.go:130] >       }
	I1202 21:09:09.027461  307731 command_runner.go:130] >     },
	I1202 21:09:09.027470  307731 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1202 21:09:09.027476  307731 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1202 21:09:09.027481  307731 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1202 21:09:09.027485  307731 command_runner.go:130] >     "disableApparmor": false,
	I1202 21:09:09.027490  307731 command_runner.go:130] >     "disableHugetlbController": true,
	I1202 21:09:09.027494  307731 command_runner.go:130] >     "disableProcMount": false,
	I1202 21:09:09.027499  307731 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1202 21:09:09.027503  307731 command_runner.go:130] >     "enableCDI": true,
	I1202 21:09:09.027507  307731 command_runner.go:130] >     "enableSelinux": false,
	I1202 21:09:09.027511  307731 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1202 21:09:09.027515  307731 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1202 21:09:09.027520  307731 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1202 21:09:09.027525  307731 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1202 21:09:09.027529  307731 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1202 21:09:09.027534  307731 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1202 21:09:09.027538  307731 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1202 21:09:09.027544  307731 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1202 21:09:09.027548  307731 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1202 21:09:09.027554  307731 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1202 21:09:09.027558  307731 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1202 21:09:09.027563  307731 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1202 21:09:09.027566  307731 command_runner.go:130] >   },
	I1202 21:09:09.027569  307731 command_runner.go:130] >   "features": {
	I1202 21:09:09.027574  307731 command_runner.go:130] >     "supplemental_groups_policy": true
	I1202 21:09:09.027577  307731 command_runner.go:130] >   },
	I1202 21:09:09.027581  307731 command_runner.go:130] >   "golang": "go1.24.9",
	I1202 21:09:09.027591  307731 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1202 21:09:09.027600  307731 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1202 21:09:09.027604  307731 command_runner.go:130] >   "runtimeHandlers": [
	I1202 21:09:09.027610  307731 command_runner.go:130] >     {
	I1202 21:09:09.027614  307731 command_runner.go:130] >       "features": {
	I1202 21:09:09.027619  307731 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1202 21:09:09.027623  307731 command_runner.go:130] >         "user_namespaces": true
	I1202 21:09:09.027626  307731 command_runner.go:130] >       }
	I1202 21:09:09.027629  307731 command_runner.go:130] >     },
	I1202 21:09:09.027631  307731 command_runner.go:130] >     {
	I1202 21:09:09.027635  307731 command_runner.go:130] >       "features": {
	I1202 21:09:09.027639  307731 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1202 21:09:09.027644  307731 command_runner.go:130] >         "user_namespaces": true
	I1202 21:09:09.027646  307731 command_runner.go:130] >       },
	I1202 21:09:09.027650  307731 command_runner.go:130] >       "name": "runc"
	I1202 21:09:09.027653  307731 command_runner.go:130] >     }
	I1202 21:09:09.027656  307731 command_runner.go:130] >   ],
	I1202 21:09:09.027659  307731 command_runner.go:130] >   "status": {
	I1202 21:09:09.027663  307731 command_runner.go:130] >     "conditions": [
	I1202 21:09:09.027666  307731 command_runner.go:130] >       {
	I1202 21:09:09.027670  307731 command_runner.go:130] >         "message": "",
	I1202 21:09:09.027673  307731 command_runner.go:130] >         "reason": "",
	I1202 21:09:09.027677  307731 command_runner.go:130] >         "status": true,
	I1202 21:09:09.027681  307731 command_runner.go:130] >         "type": "RuntimeReady"
	I1202 21:09:09.027685  307731 command_runner.go:130] >       },
	I1202 21:09:09.027688  307731 command_runner.go:130] >       {
	I1202 21:09:09.027694  307731 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1202 21:09:09.027699  307731 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1202 21:09:09.027703  307731 command_runner.go:130] >         "status": false,
	I1202 21:09:09.027707  307731 command_runner.go:130] >         "type": "NetworkReady"
	I1202 21:09:09.027710  307731 command_runner.go:130] >       },
	I1202 21:09:09.027713  307731 command_runner.go:130] >       {
	I1202 21:09:09.027718  307731 command_runner.go:130] >         "message": "",
	I1202 21:09:09.027722  307731 command_runner.go:130] >         "reason": "",
	I1202 21:09:09.027726  307731 command_runner.go:130] >         "status": true,
	I1202 21:09:09.027731  307731 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1202 21:09:09.027737  307731 command_runner.go:130] >       }
	I1202 21:09:09.027740  307731 command_runner.go:130] >     ]
	I1202 21:09:09.027743  307731 command_runner.go:130] >   }
	I1202 21:09:09.027746  307731 command_runner.go:130] > }
	I1202 21:09:09.029686  307731 cni.go:84] Creating CNI manager for ""
	I1202 21:09:09.029710  307731 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 21:09:09.029745  307731 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:09:09.029776  307731 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753958 NodeName:functional-753958 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:09:09.029910  307731 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-753958"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 21:09:09.029985  307731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 21:09:09.036886  307731 command_runner.go:130] > kubeadm
	I1202 21:09:09.036909  307731 command_runner.go:130] > kubectl
	I1202 21:09:09.036915  307731 command_runner.go:130] > kubelet
	I1202 21:09:09.037789  307731 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 21:09:09.037851  307731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:09:09.045467  307731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 21:09:09.058043  307731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 21:09:09.070239  307731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1202 21:09:09.082241  307731 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:09:09.085795  307731 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1202 21:09:09.086355  307731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:09:09.208713  307731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:09:09.542492  307731 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958 for IP: 192.168.49.2
	I1202 21:09:09.542524  307731 certs.go:195] generating shared ca certs ...
	I1202 21:09:09.542541  307731 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:09.542698  307731 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 21:09:09.542757  307731 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 21:09:09.542770  307731 certs.go:257] generating profile certs ...
	I1202 21:09:09.542908  307731 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key
	I1202 21:09:09.542989  307731 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key.c4f6fd35
	I1202 21:09:09.543042  307731 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key
	I1202 21:09:09.543063  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 21:09:09.543077  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 21:09:09.543095  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 21:09:09.543113  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 21:09:09.543136  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 21:09:09.543152  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 21:09:09.543163  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 21:09:09.543181  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 21:09:09.543248  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 21:09:09.543300  307731 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 21:09:09.543314  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:09:09.543356  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 21:09:09.543389  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:09:09.543418  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 21:09:09.543492  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 21:09:09.543552  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.543576  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.543600  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem -> /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.544214  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:09:09.562449  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:09:09.579657  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:09:09.597016  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 21:09:09.615077  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 21:09:09.633715  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 21:09:09.651379  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:09:09.669401  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 21:09:09.688777  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 21:09:09.706718  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:09:09.724108  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 21:09:09.741960  307731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 21:09:09.754915  307731 ssh_runner.go:195] Run: openssl version
	I1202 21:09:09.760531  307731 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1202 21:09:09.760935  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 21:09:09.769169  307731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.772688  307731 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.772981  307731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.773081  307731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.818276  307731 command_runner.go:130] > 3ec20f2e
	I1202 21:09:09.818787  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 21:09:09.826520  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:09:09.834827  307731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.838656  307731 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.838686  307731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.838739  307731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.879212  307731 command_runner.go:130] > b5213941
	I1202 21:09:09.879657  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 21:09:09.887484  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 21:09:09.895881  307731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.899623  307731 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.899669  307731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.899717  307731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.940074  307731 command_runner.go:130] > 51391683
	I1202 21:09:09.940525  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
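
	The ls/hash/symlink sequence above installs each CA into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem). A rough Go equivalent of one iteration, shelling out to openssl exactly as the log does -- illustrative only, with paths taken from the log and without the sudo wrapper:

	// Sketch of one CA-installation step: hash a PEM cert with openssl,
	// then link it into /etc/ssl/certs/<hash>.0 (the "ln -fs" in the log).
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func installCA(pemPath string) error {
		// Same command the log shows: openssl x509 -hash -noout -in <pem>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // "-f": drop a stale link if present
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			log.Fatal(err)
		}
	}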
	I1202 21:09:09.948324  307731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:09:09.951828  307731 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:09:09.951867  307731 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1202 21:09:09.951875  307731 command_runner.go:130] > Device: 259,1	Inode: 1305405     Links: 1
	I1202 21:09:09.951881  307731 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 21:09:09.951888  307731 command_runner.go:130] > Access: 2025-12-02 21:05:02.335914079 +0000
	I1202 21:09:09.951894  307731 command_runner.go:130] > Modify: 2025-12-02 21:00:57.486756379 +0000
	I1202 21:09:09.951898  307731 command_runner.go:130] > Change: 2025-12-02 21:00:57.486756379 +0000
	I1202 21:09:09.951903  307731 command_runner.go:130] >  Birth: 2025-12-02 21:00:57.486756379 +0000
	I1202 21:09:09.951997  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 21:09:09.992474  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:09.992586  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 21:09:10.044870  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:10.045432  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 21:09:10.090412  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:10.091042  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 21:09:10.132690  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:10.133145  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 21:09:10.173976  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:10.174453  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 21:09:10.215639  307731 command_runner.go:130] > Certificate will not expire
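
	Each "-checkend 86400" call above asks openssl whether a certificate expires within the next 24 hours. The same check can be written in pure Go with crypto/x509; this is a sketch under that equivalence, not minikube's implementation:

	// Sketch: report whether a PEM certificate expires within a duration,
	// mirroring "openssl x509 -noout -checkend 86400".
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}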
	I1202 21:09:10.216098  307731 kubeadm.go:401] StartCluster: {Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:09:10.216220  307731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 21:09:10.216321  307731 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:09:10.242158  307731 cri.go:89] found id: ""
	I1202 21:09:10.242234  307731 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:09:10.249118  307731 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1202 21:09:10.249140  307731 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1202 21:09:10.249151  307731 command_runner.go:130] > /var/lib/minikube/etcd:
	I1202 21:09:10.250041  307731 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 21:09:10.250060  307731 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 21:09:10.250140  307731 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 21:09:10.257350  307731 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:09:10.257790  307731 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-753958" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:10.257903  307731 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-261381/kubeconfig needs updating (will repair): [kubeconfig missing "functional-753958" cluster setting kubeconfig missing "functional-753958" context setting]
	I1202 21:09:10.258244  307731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:10.258662  307731 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:10.258838  307731 kapi.go:59] client config for functional-753958: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key", CAFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 21:09:10.259364  307731 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 21:09:10.259381  307731 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 21:09:10.259386  307731 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 21:09:10.259392  307731 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 21:09:10.259397  307731 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 21:09:10.259441  307731 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
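
	The kubeconfig repair above first detects that the "functional-753958" cluster and context entries are missing before rewriting the file. A hedged sketch of that detection using client-go's clientcmd loader -- assuming k8s.io/client-go is available; the path and profile name are the ones in the log:

	// Sketch: load a kubeconfig and check whether a named cluster and
	// context exist, roughly what kubeconfig.go:47/62 report above.
	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/21997-261381/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		const name = "functional-753958"
		if _, ok := cfg.Clusters[name]; !ok {
			fmt.Printf("kubeconfig missing %q cluster setting\n", name)
		}
		if _, ok := cfg.Contexts[name]; !ok {
			fmt.Printf("kubeconfig missing %q context setting\n", name)
		}
	}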
	I1202 21:09:10.259684  307731 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 21:09:10.267575  307731 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 21:09:10.267606  307731 kubeadm.go:602] duration metric: took 17.540251ms to restartPrimaryControlPlane
	I1202 21:09:10.267616  307731 kubeadm.go:403] duration metric: took 51.535685ms to StartCluster
	I1202 21:09:10.267631  307731 settings.go:142] acquiring lock: {Name:mk484fa83ac7553aeb154b510943680cadb4046e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:10.267694  307731 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:10.268283  307731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:10.268485  307731 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 21:09:10.268816  307731 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:09:10.268866  307731 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 21:09:10.268984  307731 addons.go:70] Setting storage-provisioner=true in profile "functional-753958"
	I1202 21:09:10.269003  307731 addons.go:239] Setting addon storage-provisioner=true in "functional-753958"
	I1202 21:09:10.269024  307731 host.go:66] Checking if "functional-753958" exists ...
	I1202 21:09:10.269023  307731 addons.go:70] Setting default-storageclass=true in profile "functional-753958"
	I1202 21:09:10.269176  307731 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-753958"
	I1202 21:09:10.269690  307731 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:09:10.269905  307731 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:09:10.274878  307731 out.go:179] * Verifying Kubernetes components...
	I1202 21:09:10.279673  307731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:09:10.309974  307731 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:10.310183  307731 kapi.go:59] client config for functional-753958: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key", CAFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 21:09:10.310507  307731 addons.go:239] Setting addon default-storageclass=true in "functional-753958"
	I1202 21:09:10.310544  307731 host.go:66] Checking if "functional-753958" exists ...
	I1202 21:09:10.311034  307731 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:09:10.322713  307731 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:09:10.325707  307731 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:10.325729  307731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 21:09:10.325795  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:10.357829  307731 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:10.357850  307731 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 21:09:10.357914  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:10.371695  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:10.400329  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:10.499296  307731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:09:10.516631  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:10.547824  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:11.294654  307731 node_ready.go:35] waiting up to 6m0s for node "functional-753958" to be "Ready" ...
	I1202 21:09:11.294774  307731 type.go:168] "Request Body" body=""
	I1202 21:09:11.294779  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.294839  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:11.295227  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:11.295315  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.295463  307731 retry.go:31] will retry after 210.924688ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.295467  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:11.295364  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.295550  307731 retry.go:31] will retry after 203.437895ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
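
	From here on, the addon applies keep failing with "connection refused" while the apiserver restarts, and each failure is retried after a growing delay (the retry.go:31 lines). A generic sketch of that retry-with-backoff pattern -- not minikube's actual retry package, and it omits the sudo/KUBECONFIG wrapper and the versioned kubectl path shown in the log:

	// Sketch: apply a manifest, retrying with jittered, roughly
	// exponential backoff until a deadline, as the log's retry loop does.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	func retryApply(manifest string, maxWait time.Duration) error {
		deadline := time.Now().Add(maxWait)
		backoff := 200 * time.Millisecond
		for {
			cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
			if out, err := cmd.CombinedOutput(); err == nil {
				return nil
			} else if time.Now().After(deadline) {
				return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
			}
			// Grow the delay and add jitter, mirroring the increasing
			// "will retry after ..." intervals above.
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v\n", sleep)
			time.Sleep(sleep)
			backoff *= 2
		}
	}

	func main() {
		_ = retryApply("/etc/kubernetes/addons/storageclass.yaml", 30*time.Second)
	}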
	I1202 21:09:11.500110  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:11.506791  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:11.578640  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:11.581915  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.581967  307731 retry.go:31] will retry after 400.592485ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.595609  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:11.595676  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.595708  307731 retry.go:31] will retry after 422.737023ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.794907  307731 type.go:168] "Request Body" body=""
	I1202 21:09:11.795054  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:11.795388  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:11.982828  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:12.018958  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:12.086246  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:12.086287  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.086307  307731 retry.go:31] will retry after 564.880189ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.117100  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:12.117143  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.117191  307731 retry.go:31] will retry after 637.534191ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.295409  307731 type.go:168] "Request Body" body=""
	I1202 21:09:12.295483  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:12.295805  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:12.652365  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:12.710471  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:12.710580  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.710622  307731 retry.go:31] will retry after 876.325619ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.755731  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:12.795162  307731 type.go:168] "Request Body" body=""
	I1202 21:09:12.795277  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:12.795599  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:12.835060  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:12.835099  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.835118  307731 retry.go:31] will retry after 1.227832404s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:13.295855  307731 type.go:168] "Request Body" body=""
	I1202 21:09:13.295948  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:13.296269  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:13.296338  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
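
	The node_ready wait loop above issues a GET for the node roughly every 500ms, tolerating connection-refused errors until its 6m0s deadline. An illustrative client-go version of that loop -- assuming k8s.io/client-go is available, with the kubeconfig path and node name taken from the log:

	// Sketch: poll a node's Ready condition until it is true or a
	// timeout elapses, retrying through transient apiserver outages.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21997-261381/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, "functional-753958", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			} else {
				fmt.Printf("error getting node (will retry): %v\n", err)
			}
			select {
			case <-ctx.Done():
				log.Fatal("timed out waiting for node Ready")
			case <-time.After(500 * time.Millisecond):
			}
		}
	}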
	I1202 21:09:13.587806  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:13.646676  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:13.646721  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:13.646742  307731 retry.go:31] will retry after 1.443838067s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:13.795158  307731 type.go:168] "Request Body" body=""
	I1202 21:09:13.795236  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:13.795586  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:14.064081  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:14.123819  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:14.127173  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:14.127215  307731 retry.go:31] will retry after 1.221247817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:14.295601  307731 type.go:168] "Request Body" body=""
	I1202 21:09:14.295675  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:14.295968  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:14.795792  307731 type.go:168] "Request Body" body=""
	I1202 21:09:14.795874  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:14.796179  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:15.091734  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:15.151479  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:15.151525  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:15.151546  307731 retry.go:31] will retry after 1.850953854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:15.294847  307731 type.go:168] "Request Body" body=""
	I1202 21:09:15.294941  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:15.295253  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:15.349587  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:15.413525  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:15.416721  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:15.416752  307731 retry.go:31] will retry after 1.691274377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:15.795194  307731 type.go:168] "Request Body" body=""
	I1202 21:09:15.795307  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:15.795621  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:15.795696  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:16.295456  307731 type.go:168] "Request Body" body=""
	I1202 21:09:16.295552  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:16.295874  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:16.795680  307731 type.go:168] "Request Body" body=""
	I1202 21:09:16.795755  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:16.796091  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:17.003193  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:17.061077  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:17.064289  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:17.064321  307731 retry.go:31] will retry after 2.076549374s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:17.108496  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:17.168660  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:17.168709  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:17.168731  307731 retry.go:31] will retry after 3.158627903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:17.295738  307731 type.go:168] "Request Body" body=""
	I1202 21:09:17.295812  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:17.296081  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:17.794893  307731 type.go:168] "Request Body" body=""
	I1202 21:09:17.794974  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:17.795334  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:18.294955  307731 type.go:168] "Request Body" body=""
	I1202 21:09:18.295057  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:18.295390  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:18.295447  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
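Each polling cycle above is a GET of the node object followed by a check of its Ready condition; while the apiserver refuses connections, the GET fails and the warning fires. A sketch of that condition check with client-go — assuming an already-constructed clientset, with the package and function names (`nodeutil`, `nodeIsReady`) chosen purely for illustration; this shows the pattern, not minikube's node_ready.go itself:

```go
package nodeutil

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeIsReady reports whether the named node currently has the
// condition Ready=True -- the status check behind the polling above.
func nodeIsReady(ctx context.Context, client kubernetes.Interface, name string) (bool, error) {
	node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		// While the apiserver is down, this is where the
		// "connection refused" in the warnings above surfaces.
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
```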
	I1202 21:09:18.795090  307731 type.go:168] "Request Body" body=""
	I1202 21:09:18.795156  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:18.795510  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:19.141123  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:19.199068  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:19.202437  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:19.202469  307731 retry.go:31] will retry after 2.729492901s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:19.295833  307731 type.go:168] "Request Body" body=""
	I1202 21:09:19.295905  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:19.296241  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:19.794962  307731 type.go:168] "Request Body" body=""
	I1202 21:09:19.795035  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:19.795344  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:20.295255  307731 type.go:168] "Request Body" body=""
	I1202 21:09:20.295325  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:20.295687  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:20.295737  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:20.327882  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:20.391902  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:20.391939  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:20.391960  307731 retry.go:31] will retry after 4.367650264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:20.795532  307731 type.go:168] "Request Body" body=""
	I1202 21:09:20.795609  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:20.795920  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:21.295837  307731 type.go:168] "Request Body" body=""
	I1202 21:09:21.295923  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:21.296260  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:21.794943  307731 type.go:168] "Request Body" body=""
	I1202 21:09:21.795018  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:21.795282  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:21.932718  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:21.990698  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:21.990736  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:21.990761  307731 retry.go:31] will retry after 5.196584204s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:22.295359  307731 type.go:168] "Request Body" body=""
	I1202 21:09:22.295443  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:22.295788  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:22.295845  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:22.795464  307731 type.go:168] "Request Body" body=""
	I1202 21:09:22.795562  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:22.795917  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:23.295669  307731 type.go:168] "Request Body" body=""
	I1202 21:09:23.295739  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:23.296001  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:23.795753  307731 type.go:168] "Request Body" body=""
	I1202 21:09:23.795825  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:23.796151  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:24.295815  307731 type.go:168] "Request Body" body=""
	I1202 21:09:24.295890  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:24.296207  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:24.296265  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:24.759924  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:24.795570  307731 type.go:168] "Request Body" body=""
	I1202 21:09:24.795642  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:24.795905  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:24.817214  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:24.821374  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:24.821411  307731 retry.go:31] will retry after 3.851570628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:25.294967  307731 type.go:168] "Request Body" body=""
	I1202 21:09:25.295041  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:25.295322  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:25.794947  307731 type.go:168] "Request Body" body=""
	I1202 21:09:25.795017  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:25.795343  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:26.295350  307731 type.go:168] "Request Body" body=""
	I1202 21:09:26.295431  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:26.295727  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:26.795297  307731 type.go:168] "Request Body" body=""
	I1202 21:09:26.795366  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:26.795685  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:26.795740  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:27.188447  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:27.254238  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:27.254282  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:27.254304  307731 retry.go:31] will retry after 6.785596085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:27.295437  307731 type.go:168] "Request Body" body=""
	I1202 21:09:27.295523  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:27.295865  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:27.794985  307731 type.go:168] "Request Body" body=""
	I1202 21:09:27.795057  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:27.795311  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:28.294999  307731 type.go:168] "Request Body" body=""
	I1202 21:09:28.295102  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:28.295384  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:28.674112  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:28.734788  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:28.734834  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:28.734853  307731 retry.go:31] will retry after 5.470614597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:28.794971  307731 type.go:168] "Request Body" body=""
	I1202 21:09:28.795042  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:28.795339  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:29.295607  307731 type.go:168] "Request Body" body=""
	I1202 21:09:29.295683  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:29.296024  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:29.296105  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:29.794837  307731 type.go:168] "Request Body" body=""
	I1202 21:09:29.794912  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:29.795239  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:30.295136  307731 type.go:168] "Request Body" body=""
	I1202 21:09:30.295232  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:30.295517  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:30.794890  307731 type.go:168] "Request Body" body=""
	I1202 21:09:30.794959  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:30.795267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:31.294932  307731 type.go:168] "Request Body" body=""
	I1202 21:09:31.295003  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:31.295317  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:31.794931  307731 type.go:168] "Request Body" body=""
	I1202 21:09:31.795007  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:31.795289  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:31.795338  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
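The cadence of this loop — one GET roughly every 500ms, with a warning logged after a few consecutive failures — is a plain poll-until-timeout. Sketched below with the apimachinery wait helper, placed in the same package as (and reusing) the `nodeIsReady` helper from the earlier sketch; the timeout is the caller's choice, not a value taken from this log:

```go
import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls every 500ms (the cadence visible in the log)
// until the node reports Ready or the timeout elapses. Transient
// errors such as "connection refused" are swallowed so the poll
// keeps going, just as the loop above keeps retrying.
func waitNodeReady(ctx context.Context, client kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			ready, err := nodeIsReady(ctx, client, name)
			if err != nil {
				return false, nil // retry on error instead of aborting
			}
			return ready, nil
		})
}
```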
	I1202 21:09:32.295580  307731 type.go:168] "Request Body" body=""
	I1202 21:09:32.295653  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:32.295944  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:32.795804  307731 type.go:168] "Request Body" body=""
	I1202 21:09:32.795885  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:32.796241  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:33.294972  307731 type.go:168] "Request Body" body=""
	I1202 21:09:33.295049  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:33.295374  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:33.794828  307731 type.go:168] "Request Body" body=""
	I1202 21:09:33.794899  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:33.795152  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:34.040709  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:34.103827  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:34.103870  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:34.103890  307731 retry.go:31] will retry after 13.233422448s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:34.206146  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:34.265937  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:34.265992  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:34.266011  307731 retry.go:31] will retry after 9.178751123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:34.295270  307731 type.go:168] "Request Body" body=""
	I1202 21:09:34.295377  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:34.295751  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:34.295808  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:34.795590  307731 type.go:168] "Request Body" body=""
	I1202 21:09:34.795669  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:34.795998  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:35.295384  307731 type.go:168] "Request Body" body=""
	I1202 21:09:35.295449  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:35.295792  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:35.795609  307731 type.go:168] "Request Body" body=""
	I1202 21:09:35.795690  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:35.795985  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:36.294855  307731 type.go:168] "Request Body" body=""
	I1202 21:09:36.294949  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:36.295235  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:36.795205  307731 type.go:168] "Request Body" body=""
	I1202 21:09:36.795285  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:36.795563  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:36.795617  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:37.294937  307731 type.go:168] "Request Body" body=""
	I1202 21:09:37.295019  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:37.295313  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:37.794909  307731 type.go:168] "Request Body" body=""
	I1202 21:09:37.794999  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:37.795276  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:38.294875  307731 type.go:168] "Request Body" body=""
	I1202 21:09:38.294951  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:38.295216  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:38.794960  307731 type.go:168] "Request Body" body=""
	I1202 21:09:38.795035  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:38.795328  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:39.295040  307731 type.go:168] "Request Body" body=""
	I1202 21:09:39.295116  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:39.295474  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:39.295528  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:39.795755  307731 type.go:168] "Request Body" body=""
	I1202 21:09:39.795827  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:39.796097  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:40.295759  307731 type.go:168] "Request Body" body=""
	I1202 21:09:40.295831  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:40.296122  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:40.794848  307731 type.go:168] "Request Body" body=""
	I1202 21:09:40.794921  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:40.795244  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:41.294881  307731 type.go:168] "Request Body" body=""
	I1202 21:09:41.294965  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:41.295255  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:41.794953  307731 type.go:168] "Request Body" body=""
	I1202 21:09:41.795034  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:41.795359  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:41.795415  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:42.295127  307731 type.go:168] "Request Body" body=""
	I1202 21:09:42.295208  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:42.295661  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:42.795016  307731 type.go:168] "Request Body" body=""
	I1202 21:09:42.795105  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:42.795395  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:43.294952  307731 type.go:168] "Request Body" body=""
	I1202 21:09:43.295026  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:43.295345  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:43.445783  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:43.508150  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:43.508187  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:43.508208  307731 retry.go:31] will retry after 18.255533178s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:43.795638  307731 type.go:168] "Request Body" body=""
	I1202 21:09:43.795730  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:43.796071  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:43.796132  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:44.295329  307731 type.go:168] "Request Body" body=""
	I1202 21:09:44.295407  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:44.295673  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:44.795488  307731 type.go:168] "Request Body" body=""
	I1202 21:09:44.795564  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:44.795884  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:45.295740  307731 type.go:168] "Request Body" body=""
	I1202 21:09:45.295822  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:45.296199  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:45.794853  307731 type.go:168] "Request Body" body=""
	I1202 21:09:45.794922  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:45.795177  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:46.295009  307731 type.go:168] "Request Body" body=""
	I1202 21:09:46.295107  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:46.295418  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:46.295474  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:46.795131  307731 type.go:168] "Request Body" body=""
	I1202 21:09:46.795214  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:46.795532  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:47.295250  307731 type.go:168] "Request Body" body=""
	I1202 21:09:47.295339  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:47.295611  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:47.337905  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:47.398412  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:47.398459  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:47.398478  307731 retry.go:31] will retry after 28.802230035s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
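Every failure in this stretch happens at the client-side validation step: kubectl cannot download the OpenAPI schema because nothing is listening on :8441, and the error text itself names the --validate=false escape hatch. A sketch of issuing the same apply without schema validation (manifest path taken from the log; note this only skips the client-side check — with the apiserver down the apply itself would still fail, which is why minikube keeps retrying instead):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same manifest the log keeps applying, with client-side
	// schema validation disabled as the error message suggests.
	cmd := exec.Command("kubectl", "apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
```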
	I1202 21:09:47.794958  307731 type.go:168] "Request Body" body=""
	I1202 21:09:47.795033  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:47.795332  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:48.294941  307731 type.go:168] "Request Body" body=""
	I1202 21:09:48.295012  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:48.295290  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:48.794980  307731 type.go:168] "Request Body" body=""
	I1202 21:09:48.795053  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:48.795304  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:48.795347  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:49.294944  307731 type.go:168] "Request Body" body=""
	I1202 21:09:49.295017  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:49.295302  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:49.794949  307731 type.go:168] "Request Body" body=""
	I1202 21:09:49.795021  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:49.795348  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:50.295306  307731 type.go:168] "Request Body" body=""
	I1202 21:09:50.295374  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:50.295672  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:50.795457  307731 type.go:168] "Request Body" body=""
	I1202 21:09:50.795527  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:50.795850  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:50.795908  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:51.295904  307731 type.go:168] "Request Body" body=""
	I1202 21:09:51.295977  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:51.296267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:51.794890  307731 type.go:168] "Request Body" body=""
	I1202 21:09:51.794969  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:51.795305  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:52.294941  307731 type.go:168] "Request Body" body=""
	I1202 21:09:52.295012  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:52.295341  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:52.794926  307731 type.go:168] "Request Body" body=""
	I1202 21:09:52.795024  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:52.795310  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:53.295540  307731 type.go:168] "Request Body" body=""
	I1202 21:09:53.295618  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:53.295885  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:53.295930  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET polls against https://192.168.49.2:8441/api/v1/nodes/functional-753958 repeat every ~500ms through 21:10:01.295, each returning an empty response; node_ready.go:55 logs the same "connection refused" retry warning at 21:09:55.295, 21:09:57.795, and 21:10:00.295 ...]
	I1202 21:10:01.763971  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:10:01.794859  307731 type.go:168] "Request Body" body=""
	I1202 21:10:01.794929  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:01.795196  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:01.835916  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:01.839908  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:10:01.839940  307731 retry.go:31] will retry after 30.677466671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
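The odd-looking 30.677466671s delay comes from minikube's retry helper (retry.go:31), which re-runs the failed apply after an apparently jittered backoff. A rough sketch of that pattern, assumed for illustration rather than taken from the actual retry.go; the manifest path and attempt count are likewise illustrative:

    // Hypothetical retry-with-jittered-backoff around a kubectl apply,
    // mirroring the "will retry after Ns" lines in the log.
    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    func applyWithRetry(manifest string, attempts int) error {
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            // Jittered backoff, producing fractional delays like 30.677s above.
            backoff := time.Duration(10+rand.Intn(25)) * time.Second
            fmt.Printf("apply failed: %v\n%s\nwill retry after %s\n", err, out, backoff)
            time.Sleep(backoff)
        }
        return fmt.Errorf("%s: still failing after %d attempts", manifest, attempts)
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 3); err != nil {
            fmt.Println(err)
        }
    }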
	[... the same ~500ms poll continues from 21:10:02.295 through 21:10:15.795, still "connection refused"; node_ready.go:55 warnings recur at 21:10:02.296, 21:10:04.796, 21:10:07.295, 21:10:09.295, 21:10:11.296, 21:10:13.795, and 21:10:15.795 ...]
	I1202 21:10:16.200937  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:10:16.256562  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:16.259927  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:10:16.259959  307731 retry.go:31] will retry after 18.923209073s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
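Note the failure mode: kubectl reports a validation error only because it cannot download the OpenAPI schema from the unreachable apiserver, so the suggested --validate=false would not actually rescue these applies; the apply itself would still hit the same refused connection. Every error in this run reduces to nothing listening on port 8441, which a plain TCP probe would confirm (a hedged standalone sketch, not part of minikube):

    // Hypothetical probe: check whether anything accepts TCP connections on
    // the apiserver endpoints seen in the log (192.168.49.2:8441 from the
    // node poll, localhost:8441 from kubectl's openapi fetch).
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        for _, addr := range []string{"192.168.49.2:8441", "127.0.0.1:8441"} {
            conn, err := net.DialTimeout("tcp", addr, time.Second)
            if err != nil {
                fmt.Println(addr, "refused:", err) // expected while kube-apiserver is down
                continue
            }
            conn.Close()
            fmt.Println(addr, "accepting connections")
        }
    }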
	[... the ~500ms poll continues from 21:10:16.295 through 21:10:32.295, all "connection refused"; node_ready.go:55 warnings recur at 21:10:18.295, 21:10:20.296, 21:10:22.296, 21:10:24.796, 21:10:27.295, 21:10:29.295, and 21:10:31.795 ...]
	I1202 21:10:32.517612  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:10:32.588466  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:32.591823  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:32.591933  307731 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
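At this point minikube gives up on this addon and surfaces the error to the user rather than retrying further. The manifest itself is not at fault (only the OpenAPI fetch failed) and remains on disk at /etc/kubernetes/addons/storageclass.yaml, so one manual recovery path, once the apiserver is reachable again, is to re-enable the addon from the host with `minikube addons enable default-storageclass`.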
	[... the poll repeats at 21:10:32.795, 21:10:33.295, 21:10:33.794, 21:10:34.294, and 21:10:34.795, with another node_ready.go:55 "connection refused" warning at 21:10:33.795 ...]
	I1202 21:10:35.183965  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:10:35.239016  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:35.242188  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:10:35.242221  307731 retry.go:31] will retry after 25.961571555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:10:35.295555  307731 type.go:168] "Request Body" body=""
	I1202 21:10:35.295639  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:35.295975  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:35.795775  307731 type.go:168] "Request Body" body=""
	I1202 21:10:35.795845  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:35.796134  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:35.796175  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:36.295019  307731 type.go:168] "Request Body" body=""
	I1202 21:10:36.295091  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:36.295347  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:36.794927  307731 type.go:168] "Request Body" body=""
	I1202 21:10:36.795019  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:36.795359  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:37.295060  307731 type.go:168] "Request Body" body=""
	I1202 21:10:37.295132  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:37.295466  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:37.795743  307731 type.go:168] "Request Body" body=""
	I1202 21:10:37.795817  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:37.796071  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:38.295875  307731 type.go:168] "Request Body" body=""
	I1202 21:10:38.295951  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:38.296303  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:38.296363  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:38.794921  307731 type.go:168] "Request Body" body=""
	I1202 21:10:38.794997  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:38.795362  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:39.295633  307731 type.go:168] "Request Body" body=""
	I1202 21:10:39.295705  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:39.295992  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:39.795820  307731 type.go:168] "Request Body" body=""
	I1202 21:10:39.795894  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:39.796194  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:40.295848  307731 type.go:168] "Request Body" body=""
	I1202 21:10:40.295936  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:40.296337  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:40.296429  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:40.794824  307731 type.go:168] "Request Body" body=""
	I1202 21:10:40.794917  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:40.795169  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:41.294920  307731 type.go:168] "Request Body" body=""
	I1202 21:10:41.294994  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:41.295356  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:41.794929  307731 type.go:168] "Request Body" body=""
	I1202 21:10:41.795010  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:41.795377  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:42.295089  307731 type.go:168] "Request Body" body=""
	I1202 21:10:42.295192  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:42.295500  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:42.795194  307731 type.go:168] "Request Body" body=""
	I1202 21:10:42.795316  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:42.795641  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:42.795694  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the GET https://192.168.49.2:8441/api/v1/nodes/functional-753958 poll repeats twice per second from 21:10:43 through 21:11:00 with the same empty response, and node_ready.go:55 logs the identical "connection refused" warning at 21:10:45, :47, :49, :51, :54, :56 and :59 ...]
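What the loop above is doing: minikube re-queries GET /api/v1/nodes/functional-753958 twice a second until the node reports a Ready condition or the wait times out. The sketch below re-creates that pattern with client-go; it is not node_ready.go's actual code, and the kubeconfig path, 500ms interval and 10-minute timeout are illustrative assumptions chosen to match the cadence visible in the log.

// node_ready_sketch.go: hypothetical re-creation of the poll pattern above,
// NOT minikube's node_ready.go. Kubeconfig path, interval and timeout are
// assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollImmediate(500*time.Millisecond, 10*time.Minute, func() (bool, error) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-753958", metav1.GetOptions{})
		if err != nil {
			// Log and keep polling: the API server may still be starting,
			// which is exactly the "will retry" behaviour in the log.
			fmt.Printf("error getting node (will retry): %v\n", err)
			return false, nil
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		fmt.Println("node never became Ready:", err)
	}
}

Returning (false, nil) on a transport error is what keeps the loop alive through every "connection refused"; returning the error instead would abort the wait on the first refused dial.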
	I1202 21:11:01.204061  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:11:01.267039  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:11:01.267090  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:11:01.267174  307731 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 21:11:01.270170  307731 out.go:179] * Enabled addons: 
	I1202 21:11:01.273921  307731 addons.go:530] duration metric: took 1m51.005043213s for enable addons: enabled=[]
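The storage-provisioner failure above is a symptom of the same dead API server: kubectl first tries to download /openapi/v2 for validation and hits the identical connection refused on port 8441 (the suggested --validate=false would only skip that download; the apply itself would still be refused). A hypothetical retry wrapper in the spirit of the "apply failed, will retry" message (not addons.go's real implementation; the kubectl path, attempt count and backoff are assumptions) could look like:

// apply_retry_sketch.go: hypothetical; mirrors the "apply failed, will retry"
// behaviour logged above, not minikube's actual addon code.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// applyWithRetry runs `kubectl apply` and, on failure, backs off and tries
// again, since an apply is expected to fail while the API server is down.
func applyWithRetry(manifest string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command(
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f", manifest,
		)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply attempt %d: %v: %s", i+1, err, out)
		fmt.Println(lastErr, "- will retry")
		time.Sleep(backoff)
	}
	return lastErr
}

func main() {
	// Attempt count and backoff are illustrative assumptions.
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5, 2*time.Second); err != nil {
		fmt.Println("giving up:", err)
	}
}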
	I1202 21:11:01.295263  307731 type.go:168] "Request Body" body=""
	I1202 21:11:01.295359  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:01.295653  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:01.295706  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the poll continues unchanged from 21:11:01 through 21:11:40, still refused, with the node_ready.go:55 warning repeating at 21:11:03, :05, :08, :10, :12, :15, :17, :19, :21, :24, :26, :28, :31, :33, :35, :37 and :39 ...]
	I1202 21:11:40.794857  307731 type.go:168] "Request Body" body=""
	I1202 21:11:40.794934  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:40.795237  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:41.295084  307731 type.go:168] "Request Body" body=""
	I1202 21:11:41.295163  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:41.295481  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:41.794927  307731 type.go:168] "Request Body" body=""
	I1202 21:11:41.795005  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:41.795339  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:42.295575  307731 type.go:168] "Request Body" body=""
	I1202 21:11:42.295656  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:42.295978  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:42.296030  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:42.795792  307731 type.go:168] "Request Body" body=""
	I1202 21:11:42.795869  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:42.796202  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:43.295844  307731 type.go:168] "Request Body" body=""
	I1202 21:11:43.295922  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:43.296257  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:43.795435  307731 type.go:168] "Request Body" body=""
	I1202 21:11:43.795509  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:43.795804  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:44.295603  307731 type.go:168] "Request Body" body=""
	I1202 21:11:44.295700  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:44.296029  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:44.296112  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:44.794813  307731 type.go:168] "Request Body" body=""
	I1202 21:11:44.794887  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:44.795255  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:45.294944  307731 type.go:168] "Request Body" body=""
	I1202 21:11:45.295025  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:45.295309  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:45.794932  307731 type.go:168] "Request Body" body=""
	I1202 21:11:45.795013  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:45.795341  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:46.295180  307731 type.go:168] "Request Body" body=""
	I1202 21:11:46.295255  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:46.295594  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:46.795733  307731 type.go:168] "Request Body" body=""
	I1202 21:11:46.795806  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:46.796075  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:46.796126  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:47.294799  307731 type.go:168] "Request Body" body=""
	I1202 21:11:47.294879  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:47.295242  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:47.794839  307731 type.go:168] "Request Body" body=""
	I1202 21:11:47.794920  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:47.795217  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:48.294853  307731 type.go:168] "Request Body" body=""
	I1202 21:11:48.294919  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:48.295217  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:48.794947  307731 type.go:168] "Request Body" body=""
	I1202 21:11:48.795020  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:48.795320  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:49.294951  307731 type.go:168] "Request Body" body=""
	I1202 21:11:49.295028  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:49.295348  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:49.295407  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:49.795675  307731 type.go:168] "Request Body" body=""
	I1202 21:11:49.795752  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:49.796093  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:50.295777  307731 type.go:168] "Request Body" body=""
	I1202 21:11:50.295858  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:50.296181  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:50.794944  307731 type.go:168] "Request Body" body=""
	I1202 21:11:50.795022  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:50.795327  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:51.294892  307731 type.go:168] "Request Body" body=""
	I1202 21:11:51.294961  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:51.295275  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:51.794949  307731 type.go:168] "Request Body" body=""
	I1202 21:11:51.795028  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:51.795369  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:51.795425  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:52.295105  307731 type.go:168] "Request Body" body=""
	I1202 21:11:52.295183  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:52.295500  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:52.795678  307731 type.go:168] "Request Body" body=""
	I1202 21:11:52.795744  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:52.796004  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:53.295812  307731 type.go:168] "Request Body" body=""
	I1202 21:11:53.295892  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:53.296208  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:53.795862  307731 type.go:168] "Request Body" body=""
	I1202 21:11:53.795942  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:53.796296  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:53.796344  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:54.294832  307731 type.go:168] "Request Body" body=""
	I1202 21:11:54.294896  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:54.295145  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:54.794887  307731 type.go:168] "Request Body" body=""
	I1202 21:11:54.794967  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:54.795291  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:55.294921  307731 type.go:168] "Request Body" body=""
	I1202 21:11:55.294995  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:55.295281  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:55.795485  307731 type.go:168] "Request Body" body=""
	I1202 21:11:55.795558  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:55.795809  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:56.295721  307731 type.go:168] "Request Body" body=""
	I1202 21:11:56.295797  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:56.296098  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:56.296148  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:56.794838  307731 type.go:168] "Request Body" body=""
	I1202 21:11:56.794917  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:56.795263  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:57.295605  307731 type.go:168] "Request Body" body=""
	I1202 21:11:57.295673  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:57.295938  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:57.795732  307731 type.go:168] "Request Body" body=""
	I1202 21:11:57.795802  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:57.796121  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:58.294836  307731 type.go:168] "Request Body" body=""
	I1202 21:11:58.294913  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:58.295263  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:58.794825  307731 type.go:168] "Request Body" body=""
	I1202 21:11:58.794896  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:58.795143  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:58.795190  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:59.294877  307731 type.go:168] "Request Body" body=""
	I1202 21:11:59.294951  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:59.295267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:59.794990  307731 type.go:168] "Request Body" body=""
	I1202 21:11:59.795067  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:59.795410  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:00.308704  307731 type.go:168] "Request Body" body=""
	I1202 21:12:00.308789  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:00.309104  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:00.794873  307731 type.go:168] "Request Body" body=""
	I1202 21:12:00.794956  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:00.795278  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:00.795332  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:01.294968  307731 type.go:168] "Request Body" body=""
	I1202 21:12:01.295063  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:01.295473  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:01.794941  307731 type.go:168] "Request Body" body=""
	I1202 21:12:01.795021  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:01.795373  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:02.294948  307731 type.go:168] "Request Body" body=""
	I1202 21:12:02.295043  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:02.295340  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:02.794904  307731 type.go:168] "Request Body" body=""
	I1202 21:12:02.795002  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:02.795388  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:02.795477  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:03.295203  307731 type.go:168] "Request Body" body=""
	I1202 21:12:03.295281  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:03.295626  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:03.795430  307731 type.go:168] "Request Body" body=""
	I1202 21:12:03.795507  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:03.795802  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:04.295252  307731 type.go:168] "Request Body" body=""
	I1202 21:12:04.295319  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:04.295618  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:04.795525  307731 type.go:168] "Request Body" body=""
	I1202 21:12:04.795601  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:04.795995  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:04.796063  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:05.295831  307731 type.go:168] "Request Body" body=""
	I1202 21:12:05.295911  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:05.296220  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:05.794863  307731 type.go:168] "Request Body" body=""
	I1202 21:12:05.794932  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:05.795240  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:06.294953  307731 type.go:168] "Request Body" body=""
	I1202 21:12:06.295030  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:06.295362  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:06.795000  307731 type.go:168] "Request Body" body=""
	I1202 21:12:06.795075  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:06.795417  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:07.294864  307731 type.go:168] "Request Body" body=""
	I1202 21:12:07.294943  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:07.295204  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:07.295255  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:07.794954  307731 type.go:168] "Request Body" body=""
	I1202 21:12:07.795027  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:07.795343  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:08.294932  307731 type.go:168] "Request Body" body=""
	I1202 21:12:08.295005  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:08.295356  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:08.795445  307731 type.go:168] "Request Body" body=""
	I1202 21:12:08.795520  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:08.795777  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:09.295573  307731 type.go:168] "Request Body" body=""
	I1202 21:12:09.295651  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:09.295959  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:09.296007  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:09.795632  307731 type.go:168] "Request Body" body=""
	I1202 21:12:09.795716  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:09.796054  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:10.295748  307731 type.go:168] "Request Body" body=""
	I1202 21:12:10.295818  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:10.296076  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:10.795857  307731 type.go:168] "Request Body" body=""
	I1202 21:12:10.795938  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:10.796244  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:11.294935  307731 type.go:168] "Request Body" body=""
	I1202 21:12:11.295019  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:11.295367  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:11.794937  307731 type.go:168] "Request Body" body=""
	I1202 21:12:11.795017  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:11.795294  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:11.795346  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:12.294901  307731 type.go:168] "Request Body" body=""
	I1202 21:12:12.294985  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:12.295275  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:12.794909  307731 type.go:168] "Request Body" body=""
	I1202 21:12:12.794981  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:12.795320  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:13.294853  307731 type.go:168] "Request Body" body=""
	I1202 21:12:13.294921  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:13.295173  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:13.794927  307731 type.go:168] "Request Body" body=""
	I1202 21:12:13.795001  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:13.795317  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:13.795372  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:14.294922  307731 type.go:168] "Request Body" body=""
	I1202 21:12:14.295004  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:14.295346  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:14.795456  307731 type.go:168] "Request Body" body=""
	I1202 21:12:14.795525  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:14.795875  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:15.295665  307731 type.go:168] "Request Body" body=""
	I1202 21:12:15.295750  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:15.296104  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:15.794838  307731 type.go:168] "Request Body" body=""
	I1202 21:12:15.794918  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:15.795269  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:16.294868  307731 type.go:168] "Request Body" body=""
	I1202 21:12:16.294937  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:16.295192  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:16.295232  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:16.794945  307731 type.go:168] "Request Body" body=""
	I1202 21:12:16.795044  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:16.795385  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:17.294969  307731 type.go:168] "Request Body" body=""
	I1202 21:12:17.295046  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:17.295380  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:17.795675  307731 type.go:168] "Request Body" body=""
	I1202 21:12:17.795744  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:17.796052  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:18.295868  307731 type.go:168] "Request Body" body=""
	I1202 21:12:18.295939  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:18.296239  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:18.296278  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:18.794845  307731 type.go:168] "Request Body" body=""
	I1202 21:12:18.794946  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:18.795296  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:19.294970  307731 type.go:168] "Request Body" body=""
	I1202 21:12:19.295053  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:19.295306  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:19.794942  307731 type.go:168] "Request Body" body=""
	I1202 21:12:19.795018  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:19.795339  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:20.295100  307731 type.go:168] "Request Body" body=""
	I1202 21:12:20.295174  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:20.295467  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:20.795156  307731 type.go:168] "Request Body" body=""
	I1202 21:12:20.795229  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:20.795490  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:20.795539  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:21.295446  307731 type.go:168] "Request Body" body=""
	I1202 21:12:21.295525  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:21.295851  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:21.795679  307731 type.go:168] "Request Body" body=""
	I1202 21:12:21.795753  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:21.796086  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:22.295843  307731 type.go:168] "Request Body" body=""
	I1202 21:12:22.295924  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:22.296179  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:22.794901  307731 type.go:168] "Request Body" body=""
	I1202 21:12:22.794980  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:22.795267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:23.294961  307731 type.go:168] "Request Body" body=""
	I1202 21:12:23.295032  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:23.295330  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:23.295374  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:23.795510  307731 type.go:168] "Request Body" body=""
	I1202 21:12:23.795578  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:23.795879  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:24.295535  307731 type.go:168] "Request Body" body=""
	I1202 21:12:24.295609  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:24.295949  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:24.795766  307731 type.go:168] "Request Body" body=""
	I1202 21:12:24.795904  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:24.796239  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:25.294814  307731 type.go:168] "Request Body" body=""
	I1202 21:12:25.294881  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:25.295138  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:25.794831  307731 type.go:168] "Request Body" body=""
	I1202 21:12:25.794905  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:25.795241  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:25.795296  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:26.294932  307731 type.go:168] "Request Body" body=""
	I1202 21:12:26.295012  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:26.295307  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:26.795516  307731 type.go:168] "Request Body" body=""
	I1202 21:12:26.795596  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:26.795868  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:27.295674  307731 type.go:168] "Request Body" body=""
	I1202 21:12:27.295752  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:27.296076  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:27.795851  307731 type.go:168] "Request Body" body=""
	I1202 21:12:27.795930  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:27.796225  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:27.796269  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:28.294924  307731 type.go:168] "Request Body" body=""
	I1202 21:12:28.294993  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:28.295262  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:28.794903  307731 type.go:168] "Request Body" body=""
	I1202 21:12:28.794974  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:28.795267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:29.294904  307731 type.go:168] "Request Body" body=""
	I1202 21:12:29.294980  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:29.295344  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:29.795043  307731 type.go:168] "Request Body" body=""
	I1202 21:12:29.795116  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:29.795431  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:30.295491  307731 type.go:168] "Request Body" body=""
	I1202 21:12:30.295565  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:30.295854  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:30.295900  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	[... from 21:12:30.79 through 21:13:30.79 the log continues with identical ~500ms GET polls of https://192.168.49.2:8441/api/v1/nodes/functional-753958, each with the same Accept and User-Agent headers; every response comes back empty (status="" milliseconds=0), and node_ready.go:55 repeats the "connection refused (will retry)" warning roughly every 2.5 seconds ...]
	I1202 21:13:31.297776  307731 type.go:168] "Request Body" body=""
	I1202 21:13:31.297853  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:31.298178  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:31.298228  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:31.794922  307731 type.go:168] "Request Body" body=""
	I1202 21:13:31.795000  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:31.795332  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:32.295025  307731 type.go:168] "Request Body" body=""
	I1202 21:13:32.295102  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:32.295433  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:32.795735  307731 type.go:168] "Request Body" body=""
	I1202 21:13:32.795800  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:32.796165  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:33.294866  307731 type.go:168] "Request Body" body=""
	I1202 21:13:33.294952  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:33.295304  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:33.794929  307731 type.go:168] "Request Body" body=""
	I1202 21:13:33.795016  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:33.795321  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:33.795370  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:34.294855  307731 type.go:168] "Request Body" body=""
	I1202 21:13:34.294928  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:34.295184  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:34.794873  307731 type.go:168] "Request Body" body=""
	I1202 21:13:34.794945  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:34.795278  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:35.294879  307731 type.go:168] "Request Body" body=""
	I1202 21:13:35.294959  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:35.295320  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:35.794857  307731 type.go:168] "Request Body" body=""
	I1202 21:13:35.794925  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:35.795178  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:36.294917  307731 type.go:168] "Request Body" body=""
	I1202 21:13:36.294991  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:36.295321  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:36.295373  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:36.795049  307731 type.go:168] "Request Body" body=""
	I1202 21:13:36.795127  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:36.795475  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:37.295735  307731 type.go:168] "Request Body" body=""
	I1202 21:13:37.295805  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:37.296066  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:37.795800  307731 type.go:168] "Request Body" body=""
	I1202 21:13:37.795873  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:37.796213  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:38.295712  307731 type.go:168] "Request Body" body=""
	I1202 21:13:38.295790  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:38.296136  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:38.296189  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:38.794842  307731 type.go:168] "Request Body" body=""
	I1202 21:13:38.794912  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:38.795163  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:39.294845  307731 type.go:168] "Request Body" body=""
	I1202 21:13:39.294918  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:39.295253  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:39.794921  307731 type.go:168] "Request Body" body=""
	I1202 21:13:39.795000  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:39.795333  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:40.295048  307731 type.go:168] "Request Body" body=""
	I1202 21:13:40.295117  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:40.295365  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:40.794900  307731 type.go:168] "Request Body" body=""
	I1202 21:13:40.794977  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:40.795334  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:40.795390  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:41.294896  307731 type.go:168] "Request Body" body=""
	I1202 21:13:41.294974  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:41.295282  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:41.794943  307731 type.go:168] "Request Body" body=""
	I1202 21:13:41.795060  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:41.795375  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:42.295106  307731 type.go:168] "Request Body" body=""
	I1202 21:13:42.295194  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:42.295589  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:42.794935  307731 type.go:168] "Request Body" body=""
	I1202 21:13:42.795013  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:42.795335  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:43.294846  307731 type.go:168] "Request Body" body=""
	I1202 21:13:43.294916  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:43.295163  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:43.295211  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:43.794883  307731 type.go:168] "Request Body" body=""
	I1202 21:13:43.794959  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:43.795282  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:44.294913  307731 type.go:168] "Request Body" body=""
	I1202 21:13:44.294993  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:44.295365  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:44.795651  307731 type.go:168] "Request Body" body=""
	I1202 21:13:44.795720  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:44.795982  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:45.295757  307731 type.go:168] "Request Body" body=""
	I1202 21:13:45.295838  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:45.296285  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:45.296345  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:45.795035  307731 type.go:168] "Request Body" body=""
	I1202 21:13:45.795117  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:45.795459  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:46.295269  307731 type.go:168] "Request Body" body=""
	I1202 21:13:46.295336  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:46.295589  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:46.794923  307731 type.go:168] "Request Body" body=""
	I1202 21:13:46.795000  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:46.795333  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:47.294920  307731 type.go:168] "Request Body" body=""
	I1202 21:13:47.295001  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:47.295346  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:47.794865  307731 type.go:168] "Request Body" body=""
	I1202 21:13:47.794939  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:47.795193  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:47.795233  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:48.294918  307731 type.go:168] "Request Body" body=""
	I1202 21:13:48.295004  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:48.295374  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:48.795086  307731 type.go:168] "Request Body" body=""
	I1202 21:13:48.795165  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:48.795501  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:49.295207  307731 type.go:168] "Request Body" body=""
	I1202 21:13:49.295288  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:49.295554  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:49.795252  307731 type.go:168] "Request Body" body=""
	I1202 21:13:49.795322  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:49.795632  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:49.795684  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:50.295531  307731 type.go:168] "Request Body" body=""
	I1202 21:13:50.295604  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:50.295957  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:50.795535  307731 type.go:168] "Request Body" body=""
	I1202 21:13:50.795608  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:50.796073  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:51.295825  307731 type.go:168] "Request Body" body=""
	I1202 21:13:51.295900  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:51.296243  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:51.794922  307731 type.go:168] "Request Body" body=""
	I1202 21:13:51.794998  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:51.795338  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:52.294871  307731 type.go:168] "Request Body" body=""
	I1202 21:13:52.294945  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:52.295299  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:52.295371  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:52.794884  307731 type.go:168] "Request Body" body=""
	I1202 21:13:52.794958  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:52.795306  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:53.295056  307731 type.go:168] "Request Body" body=""
	I1202 21:13:53.295127  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:53.295442  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:53.794868  307731 type.go:168] "Request Body" body=""
	I1202 21:13:53.794943  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:53.795222  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:54.294840  307731 type.go:168] "Request Body" body=""
	I1202 21:13:54.294920  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:54.295301  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:54.794903  307731 type.go:168] "Request Body" body=""
	I1202 21:13:54.794979  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:54.795316  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:54.795366  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:55.295563  307731 type.go:168] "Request Body" body=""
	I1202 21:13:55.295641  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:55.295904  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:55.795697  307731 type.go:168] "Request Body" body=""
	I1202 21:13:55.795777  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:55.796113  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:56.294907  307731 type.go:168] "Request Body" body=""
	I1202 21:13:56.294993  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:56.295306  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:56.794876  307731 type.go:168] "Request Body" body=""
	I1202 21:13:56.794944  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:56.795238  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:57.294956  307731 type.go:168] "Request Body" body=""
	I1202 21:13:57.295035  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:57.295369  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:57.295426  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:57.794933  307731 type.go:168] "Request Body" body=""
	I1202 21:13:57.795051  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:57.795372  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:58.294806  307731 type.go:168] "Request Body" body=""
	I1202 21:13:58.294875  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:58.295122  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:58.794831  307731 type.go:168] "Request Body" body=""
	I1202 21:13:58.794910  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:58.795229  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:59.294931  307731 type.go:168] "Request Body" body=""
	I1202 21:13:59.295009  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:59.295361  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:59.795054  307731 type.go:168] "Request Body" body=""
	I1202 21:13:59.795127  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:59.795386  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:59.795436  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:00.295787  307731 type.go:168] "Request Body" body=""
	I1202 21:14:00.295877  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:00.296197  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:00.794932  307731 type.go:168] "Request Body" body=""
	I1202 21:14:00.795003  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:00.795357  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:01.294857  307731 type.go:168] "Request Body" body=""
	I1202 21:14:01.294929  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:01.295226  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:01.795016  307731 type.go:168] "Request Body" body=""
	I1202 21:14:01.795098  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:01.795437  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:01.795499  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:02.294936  307731 type.go:168] "Request Body" body=""
	I1202 21:14:02.295017  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:02.295413  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:02.795680  307731 type.go:168] "Request Body" body=""
	I1202 21:14:02.795756  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:02.796018  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:03.295834  307731 type.go:168] "Request Body" body=""
	I1202 21:14:03.295906  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:03.296221  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:03.794923  307731 type.go:168] "Request Body" body=""
	I1202 21:14:03.795002  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:03.795347  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:04.295602  307731 type.go:168] "Request Body" body=""
	I1202 21:14:04.295676  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:04.296005  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:04.296061  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:04.794930  307731 type.go:168] "Request Body" body=""
	I1202 21:14:04.795015  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:04.795363  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:05.295040  307731 type.go:168] "Request Body" body=""
	I1202 21:14:05.295123  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:05.295457  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:05.795144  307731 type.go:168] "Request Body" body=""
	I1202 21:14:05.795214  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:05.795466  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:06.295374  307731 type.go:168] "Request Body" body=""
	I1202 21:14:06.295448  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:06.295743  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:06.794927  307731 type.go:168] "Request Body" body=""
	I1202 21:14:06.795004  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:06.795340  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:06.795401  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:07.295611  307731 type.go:168] "Request Body" body=""
	I1202 21:14:07.295678  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:07.295927  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:07.795672  307731 type.go:168] "Request Body" body=""
	I1202 21:14:07.795746  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:07.796102  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:08.295447  307731 type.go:168] "Request Body" body=""
	I1202 21:14:08.295523  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:08.295852  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:08.795225  307731 type.go:168] "Request Body" body=""
	I1202 21:14:08.795296  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:08.795548  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:08.795589  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:09.294939  307731 type.go:168] "Request Body" body=""
	I1202 21:14:09.295018  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:09.295329  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:09.794931  307731 type.go:168] "Request Body" body=""
	I1202 21:14:09.795014  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:09.795372  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:10.295213  307731 type.go:168] "Request Body" body=""
	I1202 21:14:10.295283  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:10.295555  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:10.794913  307731 type.go:168] "Request Body" body=""
	I1202 21:14:10.794989  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:10.795326  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:11.294894  307731 type.go:168] "Request Body" body=""
	I1202 21:14:11.294973  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:11.295333  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:11.295391  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:11.794858  307731 type.go:168] "Request Body" body=""
	I1202 21:14:11.794926  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:11.795184  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:12.294866  307731 type.go:168] "Request Body" body=""
	I1202 21:14:12.294951  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:12.295300  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:12.794998  307731 type.go:168] "Request Body" body=""
	I1202 21:14:12.795075  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:12.795409  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:13.295664  307731 type.go:168] "Request Body" body=""
	I1202 21:14:13.295731  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:13.295992  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:13.296034  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:13.795754  307731 type.go:168] "Request Body" body=""
	I1202 21:14:13.795825  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:13.796122  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:14.294862  307731 type.go:168] "Request Body" body=""
	I1202 21:14:14.294938  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:14.295285  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:14.795586  307731 type.go:168] "Request Body" body=""
	I1202 21:14:14.795651  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:14.795954  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:15.295756  307731 type.go:168] "Request Body" body=""
	I1202 21:14:15.295834  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:15.296219  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:15.296293  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:15.794916  307731 type.go:168] "Request Body" body=""
	I1202 21:14:15.794990  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:15.795328  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:16.294867  307731 type.go:168] "Request Body" body=""
	I1202 21:14:16.294940  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:16.295275  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:16.794930  307731 type.go:168] "Request Body" body=""
	I1202 21:14:16.795011  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:16.795371  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:17.295083  307731 type.go:168] "Request Body" body=""
	I1202 21:14:17.295168  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:17.295533  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:17.795809  307731 type.go:168] "Request Body" body=""
	I1202 21:14:17.795877  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:17.796133  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:17.796172  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:18.294860  307731 type.go:168] "Request Body" body=""
	I1202 21:14:18.294933  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:18.295293  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:18.794860  307731 type.go:168] "Request Body" body=""
	I1202 21:14:18.794937  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:18.795278  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:19.294969  307731 type.go:168] "Request Body" body=""
	I1202 21:14:19.295036  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:19.295289  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:19.794925  307731 type.go:168] "Request Body" body=""
	I1202 21:14:19.795003  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:19.795302  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:20.295739  307731 type.go:168] "Request Body" body=""
	I1202 21:14:20.295816  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:20.296151  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:20.296213  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:20.795440  307731 type.go:168] "Request Body" body=""
	I1202 21:14:20.795511  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:20.795763  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:21.295657  307731 type.go:168] "Request Body" body=""
	I1202 21:14:21.295765  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:21.296103  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:21.795787  307731 type.go:168] "Request Body" body=""
	I1202 21:14:21.795862  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:21.796230  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:22.294911  307731 type.go:168] "Request Body" body=""
	I1202 21:14:22.294978  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:22.295233  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:22.794926  307731 type.go:168] "Request Body" body=""
	I1202 21:14:22.794999  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:22.795311  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:22.795368  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:23.294926  307731 type.go:168] "Request Body" body=""
	I1202 21:14:23.295000  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:23.295323  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:23.795654  307731 type.go:168] "Request Body" body=""
	I1202 21:14:23.795728  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:23.795993  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:24.295761  307731 type.go:168] "Request Body" body=""
	I1202 21:14:24.295839  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:24.296161  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:24.794909  307731 type.go:168] "Request Body" body=""
	I1202 21:14:24.794986  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:24.795310  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:25.294859  307731 type.go:168] "Request Body" body=""
	I1202 21:14:25.294935  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:25.295190  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:25.295232  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:25.794923  307731 type.go:168] "Request Body" body=""
	I1202 21:14:25.794997  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:25.795341  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:26.294936  307731 type.go:168] "Request Body" body=""
	I1202 21:14:26.295020  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:26.295383  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:26.795713  307731 type.go:168] "Request Body" body=""
	I1202 21:14:26.795787  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:26.796101  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:27.294827  307731 type.go:168] "Request Body" body=""
	I1202 21:14:27.294901  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:27.295233  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:27.295286  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:27.794831  307731 type.go:168] "Request Body" body=""
	I1202 21:14:27.794916  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:27.795255  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:28.294946  307731 type.go:168] "Request Body" body=""
	I1202 21:14:28.295018  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:28.295278  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:28.794910  307731 type.go:168] "Request Body" body=""
	I1202 21:14:28.794997  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:28.795366  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:29.295050  307731 type.go:168] "Request Body" body=""
	I1202 21:14:29.295134  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:29.295479  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:29.295536  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:29.795762  307731 type.go:168] "Request Body" body=""
	I1202 21:14:29.795842  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:29.796119  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:30.295026  307731 type.go:168] "Request Body" body=""
	I1202 21:14:30.295100  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:30.295424  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:30.795136  307731 type.go:168] "Request Body" body=""
	I1202 21:14:30.795210  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:30.795534  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:31.295356  307731 type.go:168] "Request Body" body=""
	I1202 21:14:31.295420  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:31.295666  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:31.295705  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:31.795445  307731 type.go:168] "Request Body" body=""
	I1202 21:14:31.795523  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:31.795898  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:32.295545  307731 type.go:168] "Request Body" body=""
	I1202 21:14:32.295621  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:32.295915  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:32.795216  307731 type.go:168] "Request Body" body=""
	I1202 21:14:32.795294  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:32.795544  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:33.294908  307731 type.go:168] "Request Body" body=""
	I1202 21:14:33.294979  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:33.295290  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:33.795032  307731 type.go:168] "Request Body" body=""
	I1202 21:14:33.795113  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:33.795460  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:33.795521  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:34.294847  307731 type.go:168] "Request Body" body=""
	I1202 21:14:34.294919  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:34.295175  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:34.794878  307731 type.go:168] "Request Body" body=""
	I1202 21:14:34.794952  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:34.795309  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:35.295020  307731 type.go:168] "Request Body" body=""
	I1202 21:14:35.295113  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:35.295444  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:35.795727  307731 type.go:168] "Request Body" body=""
	I1202 21:14:35.795796  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:35.796110  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:35.796169  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:36.294868  307731 type.go:168] "Request Body" body=""
	I1202 21:14:36.294941  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:36.295256  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:36.794930  307731 type.go:168] "Request Body" body=""
	I1202 21:14:36.795012  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:36.795362  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:37.294916  307731 type.go:168] "Request Body" body=""
	I1202 21:14:37.294983  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:37.295233  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:37.794890  307731 type.go:168] "Request Body" body=""
	I1202 21:14:37.794972  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:37.795286  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:38.294932  307731 type.go:168] "Request Body" body=""
	I1202 21:14:38.295012  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:38.295350  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:38.295411  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:38.795081  307731 type.go:168] "Request Body" body=""
	I1202 21:14:38.795152  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:38.795443  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:39.294924  307731 type.go:168] "Request Body" body=""
	I1202 21:14:39.294999  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:39.295318  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:39.795068  307731 type.go:168] "Request Body" body=""
	I1202 21:14:39.795153  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:39.795518  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:40.295494  307731 type.go:168] "Request Body" body=""
	I1202 21:14:40.295565  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:40.295837  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:40.295880  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:40.795619  307731 type.go:168] "Request Body" body=""
	I1202 21:14:40.795692  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:40.796024  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:41.294908  307731 type.go:168] "Request Body" body=""
	I1202 21:14:41.294987  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:41.295358  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:41.795647  307731 type.go:168] "Request Body" body=""
	I1202 21:14:41.795719  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:41.795987  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:42.295806  307731 type.go:168] "Request Body" body=""
	I1202 21:14:42.295896  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:42.296282  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:42.296340  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:42.794938  307731 type.go:168] "Request Body" body=""
	I1202 21:14:42.795011  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:42.795349  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:43.295078  307731 type.go:168] "Request Body" body=""
	I1202 21:14:43.295167  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:43.295472  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:43.794967  307731 type.go:168] "Request Body" body=""
	I1202 21:14:43.795039  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:43.795367  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:44.295072  307731 type.go:168] "Request Body" body=""
	I1202 21:14:44.295155  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:44.295479  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:44.795154  307731 type.go:168] "Request Body" body=""
	I1202 21:14:44.795226  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:44.795482  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:44.795526  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:45.295089  307731 type.go:168] "Request Body" body=""
	I1202 21:14:45.295173  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:45.295911  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:45.795805  307731 type.go:168] "Request Body" body=""
	I1202 21:14:45.795885  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:45.796164  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:46.295025  307731 type.go:168] "Request Body" body=""
	I1202 21:14:46.295107  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:46.295367  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:46.794949  307731 type.go:168] "Request Body" body=""
	I1202 21:14:46.795023  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:46.795381  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:47.294941  307731 type.go:168] "Request Body" body=""
	I1202 21:14:47.295015  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:47.295306  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:47.295354  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:47.794995  307731 type.go:168] "Request Body" body=""
	I1202 21:14:47.795072  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:47.795339  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:48.294900  307731 type.go:168] "Request Body" body=""
	I1202 21:14:48.294975  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:48.295293  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:48.794879  307731 type.go:168] "Request Body" body=""
	I1202 21:14:48.794954  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:48.795282  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:49.294865  307731 type.go:168] "Request Body" body=""
	I1202 21:14:49.294942  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:49.295208  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:49.794904  307731 type.go:168] "Request Body" body=""
	I1202 21:14:49.794976  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:49.795293  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:49.795354  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:50.295309  307731 type.go:168] "Request Body" body=""
	I1202 21:14:50.295381  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:50.295715  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:50.794949  307731 type.go:168] "Request Body" body=""
	I1202 21:14:50.795017  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:50.795263  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:51.294934  307731 type.go:168] "Request Body" body=""
	I1202 21:14:51.295008  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:51.295359  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:51.795069  307731 type.go:168] "Request Body" body=""
	I1202 21:14:51.795153  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:51.795518  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:51.795574  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:52.295227  307731 type.go:168] "Request Body" body=""
	I1202 21:14:52.295298  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:52.295552  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:52.794919  307731 type.go:168] "Request Body" body=""
	I1202 21:14:52.794995  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:52.795317  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:53.295040  307731 type.go:168] "Request Body" body=""
	I1202 21:14:53.295116  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:53.295449  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:53.794876  307731 type.go:168] "Request Body" body=""
	I1202 21:14:53.794947  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:53.795218  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:54.294904  307731 type.go:168] "Request Body" body=""
	I1202 21:14:54.294975  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:54.295320  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:54.295377  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:54.795076  307731 type.go:168] "Request Body" body=""
	I1202 21:14:54.795150  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:54.795490  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:55.295170  307731 type.go:168] "Request Body" body=""
	I1202 21:14:55.295241  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:55.295544  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:55.794930  307731 type.go:168] "Request Body" body=""
	I1202 21:14:55.795021  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:55.795317  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:56.295064  307731 type.go:168] "Request Body" body=""
	I1202 21:14:56.295148  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:56.295496  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:56.295551  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:56.795788  307731 type.go:168] "Request Body" body=""
	I1202 21:14:56.795881  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:56.796235  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:57.294957  307731 type.go:168] "Request Body" body=""
	I1202 21:14:57.295029  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:57.295368  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:57.795079  307731 type.go:168] "Request Body" body=""
	I1202 21:14:57.795157  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:57.795491  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:58.295762  307731 type.go:168] "Request Body" body=""
	I1202 21:14:58.295829  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:58.296084  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:58.296124  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:58.795828  307731 type.go:168] "Request Body" body=""
	I1202 21:14:58.795901  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:58.796192  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:59.294890  307731 type.go:168] "Request Body" body=""
	I1202 21:14:59.294971  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:59.295293  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:59.795662  307731 type.go:168] "Request Body" body=""
	I1202 21:14:59.795732  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:59.795995  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:00.294841  307731 type.go:168] "Request Body" body=""
	I1202 21:15:00.294929  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:00.295288  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:00.794964  307731 type.go:168] "Request Body" body=""
	I1202 21:15:00.795065  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:00.795443  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:15:00.795520  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:15:01.295565  307731 type.go:168] "Request Body" body=""
	I1202 21:15:01.295641  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:01.295933  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:01.795669  307731 type.go:168] "Request Body" body=""
	I1202 21:15:01.795744  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:01.796077  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:02.294851  307731 type.go:168] "Request Body" body=""
	I1202 21:15:02.294928  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:02.295300  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:02.794984  307731 type.go:168] "Request Body" body=""
	I1202 21:15:02.795058  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:02.795384  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:03.294942  307731 type.go:168] "Request Body" body=""
	I1202 21:15:03.295015  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:03.295368  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:15:03.295426  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:15:03.794971  307731 type.go:168] "Request Body" body=""
	I1202 21:15:03.795053  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:03.795395  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:04.295082  307731 type.go:168] "Request Body" body=""
	I1202 21:15:04.295157  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:04.295429  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:04.794958  307731 type.go:168] "Request Body" body=""
	I1202 21:15:04.795043  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:04.795426  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:05.294930  307731 type.go:168] "Request Body" body=""
	I1202 21:15:05.295018  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:05.295356  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:05.795116  307731 type.go:168] "Request Body" body=""
	I1202 21:15:05.795195  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:05.795515  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:15:05.795575  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:15:06.295370  307731 type.go:168] "Request Body" body=""
	I1202 21:15:06.295451  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:06.295771  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:06.795538  307731 type.go:168] "Request Body" body=""
	I1202 21:15:06.795617  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:06.795962  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:07.295701  307731 type.go:168] "Request Body" body=""
	I1202 21:15:07.295775  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:07.296023  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:07.795795  307731 type.go:168] "Request Body" body=""
	I1202 21:15:07.795872  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:07.796194  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:15:07.796261  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:15:08.294940  307731 type.go:168] "Request Body" body=""
	I1202 21:15:08.295013  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:08.295374  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:08.794862  307731 type.go:168] "Request Body" body=""
	I1202 21:15:08.794931  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:08.795235  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:09.294931  307731 type.go:168] "Request Body" body=""
	I1202 21:15:09.295007  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:09.295352  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:09.795086  307731 type.go:168] "Request Body" body=""
	I1202 21:15:09.795162  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:09.795514  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:10.299197  307731 type.go:168] "Request Body" body=""
	I1202 21:15:10.299301  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:10.299703  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:15:10.299761  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:15:10.795524  307731 type.go:168] "Request Body" body=""
	I1202 21:15:10.795615  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:10.796019  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:11.294844  307731 node_ready.go:38] duration metric: took 6m0.000140797s for node "functional-753958" to be "Ready" ...
	I1202 21:15:11.298019  307731 out.go:203] 
	W1202 21:15:11.300907  307731 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 21:15:11.300927  307731 out.go:285] * 
	W1202 21:15:11.303086  307731 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:15:11.306181  307731 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-753958 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m5.958458113s for "functional-753958" cluster.
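The stderr dump above shows why: every GET to https://192.168.49.2:8441/api/v1/nodes/functional-753958 was refused for the full six-minute window, so the "Ready" wait could only run into its WaitNodeCondition deadline. The endpoint can be probed by hand with ordinary tools; a minimal sketch, assuming the IP, port, and profile name taken from the log, and that crictl is available inside the node image (standard for the containerd runtime):

    # Raw TCP probe of the apiserver port; "connection refused" here matches the log.
    nc -vz 192.168.49.2 8441

    # Ask the apiserver directly (-k skips TLS verification); expect the same refusal while it is down.
    curl -k --max-time 5 https://192.168.49.2:8441/healthz

    # Look inside the minikube node for the kube-apiserver container and its state.
    docker exec functional-753958 crictl ps -a --name kube-apiserver

If reproducing locally, out/minikube-linux-arm64 logs -p functional-753958 --file=logs.txt collects the node logs that the advice box in the stderr output asks for.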
I1202 21:15:11.775347  263241 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
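All three proxy variables are empty, so the refused connections above are not a proxy artifact. The equivalent manual check is plain shell, nothing minikube-specific:

    # Show any proxy variables in the current environment (case-insensitive).
    env | grep -iE '^(http_proxy|https_proxy|no_proxy)=' || echo "no proxy variables set"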
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753958
helpers_test.go:243: (dbg) docker inspect functional-753958:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	        "Created": "2025-12-02T21:00:39.470229988Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301734,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:00:39.535019201Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hostname",
	        "HostsPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hosts",
	        "LogPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a-json.log",
	        "Name": "/functional-753958",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753958:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-753958",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	                "LowerDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753958",
	                "Source": "/var/lib/docker/volumes/functional-753958/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753958",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753958",
	                "name.minikube.sigs.k8s.io": "functional-753958",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44df82336b1507d3d877e818baebb098332071ab7b3e3f7343e15c1fe55b3ab1",
	            "SandboxKey": "/var/run/docker/netns/44df82336b15",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753958": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:7f:7f:d7:c5:84",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e90d0c1216d32743827f22180e4e07c31360f0f3cc3431312aff46869716bb9",
	                    "EndpointID": "5ead8efafa1df1b03c8f1f51c032157081a17706bc48186adc0670bc42c0b521",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753958",
	                        "321ef4a88b51"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
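The docker inspect dump above is also where the harness learns the published host ports: NetworkSettings.Ports maps 22/tcp to 127.0.0.1:33108, 8441/tcp to 127.0.0.1:33111, and so on. A minimal Go sketch of the same lookup, using the -f template the cli_runner lines in the log below invoke (the hostPort helper name is ours, not minikube's):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPort returns the host port Docker published for a container port,
    // via the same Go template the logged cli_runner commands use.
    func hostPort(container, port string) (string, error) {
        tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        p, _ := hostPort("functional-753958", "22/tcp")
        fmt.Println(p) // "33108" for the container inspected above
    }

In this run that value is exactly what libmachine later dials for SSH (127.0.0.1:33108).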
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958: exit status 2 (369.839459ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
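Note on the non-zero exit: --format={{.Host}} renders just one field of the status result through a Go template, so the command can print Running while still exiting 2 because other components are unhealthy; the harness explicitly treats that as "may be ok". A sketch of the template behavior, with a hypothetical Status struct standing in for minikube's:

    package main

    import (
        "os"
        "text/template"
    )

    // Status is a stand-in for minikube's status struct; the field names
    // here are assumptions for illustration.
    type Status struct {
        Host, Kubelet, APIServer string
    }

    func main() {
        st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
        t := template.Must(template.New("status").Parse("{{.Host}}"))
        _ = t.Execute(os.Stdout, st) // prints "Running" even when components are down
    }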
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-446665 ssh sudo cat /etc/ssl/certs/2632412.pem                                                                                                       │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls                                                                                                                                      │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ ssh            │ functional-446665 ssh sudo cat /usr/share/ca-certificates/2632412.pem                                                                                           │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ ssh            │ functional-446665 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image load --daemon kicbase/echo-server:functional-446665 --alsologtostderr                                                                   │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ ssh            │ functional-446665 ssh sudo cat /etc/test/nested/copy/263241/hosts                                                                                               │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls                                                                                                                                      │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image save kicbase/echo-server:functional-446665 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image rm kicbase/echo-server:functional-446665 --alsologtostderr                                                                              │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls                                                                                                                                      │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ update-context │ functional-446665 update-context --alsologtostderr -v=2                                                                                                         │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ update-context │ functional-446665 update-context --alsologtostderr -v=2                                                                                                         │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ update-context │ functional-446665 update-context --alsologtostderr -v=2                                                                                                         │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls                                                                                                                                      │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image save --daemon kicbase/echo-server:functional-446665 --alsologtostderr                                                                   │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls --format yaml --alsologtostderr                                                                                                      │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls --format short --alsologtostderr                                                                                                     │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls --format json --alsologtostderr                                                                                                      │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls --format table --alsologtostderr                                                                                                     │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ ssh            │ functional-446665 ssh pgrep buildkitd                                                                                                                           │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │                     │
	│ image          │ functional-446665 image build -t localhost/my-image:functional-446665 testdata/build --alsologtostderr                                                          │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls                                                                                                                                      │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ delete         │ -p functional-446665                                                                                                                                            │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ start          │ -p functional-753958 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │                     │
	│ start          │ -p functional-753958 --alsologtostderr -v=8                                                                                                                     │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:09 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:09:05
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:09:05.869127  307731 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:09:05.869342  307731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:09:05.869372  307731 out.go:374] Setting ErrFile to fd 2...
	I1202 21:09:05.869392  307731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:09:05.870120  307731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:09:05.870642  307731 out.go:368] Setting JSON to false
	I1202 21:09:05.871532  307731 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10284,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 21:09:05.871698  307731 start.go:143] virtualization:  
	I1202 21:09:05.875240  307731 out.go:179] * [functional-753958] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:09:05.878196  307731 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:09:05.878269  307731 notify.go:221] Checking for updates...
	I1202 21:09:05.884072  307731 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:09:05.886942  307731 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:05.889899  307731 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 21:09:05.892813  307731 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:09:05.895771  307731 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:09:05.899217  307731 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:09:05.899365  307731 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:09:05.932799  307731 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:09:05.932919  307731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:09:05.993966  307731 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:09:05.984741651 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:09:05.994072  307731 docker.go:319] overlay module found
	I1202 21:09:05.997248  307731 out.go:179] * Using the docker driver based on existing profile
	I1202 21:09:06.000038  307731 start.go:309] selected driver: docker
	I1202 21:09:06.000060  307731 start.go:927] validating driver "docker" against &{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:09:06.000154  307731 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:09:06.000264  307731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:09:06.066709  307731 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:09:06.057768194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:09:06.067144  307731 cni.go:84] Creating CNI manager for ""
	I1202 21:09:06.067209  307731 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 21:09:06.067263  307731 start.go:353] cluster config:
	{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:09:06.070421  307731 out.go:179] * Starting "functional-753958" primary control-plane node in "functional-753958" cluster
	I1202 21:09:06.073261  307731 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 21:09:06.078117  307731 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:09:06.080953  307731 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 21:09:06.081041  307731 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:09:06.101516  307731 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:09:06.101541  307731 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 21:09:06.138751  307731 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1202 21:09:06.314468  307731 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
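The two 404s above mean no preloaded image tarball is published for v1.35.0-beta.0 on containerd/arm64, so minikube falls back to the per-image cache checks that follow. A hedged sketch of such an existence probe (assumed behavior, not minikube's exact code):

    package main

    import (
        "fmt"
        "net/http"
    )

    // preloadExists probes whether a preload tarball is published; a 404,
    // as logged above, triggers the per-image fallback.
    func preloadExists(url string) (bool, error) {
        resp, err := http.Head(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        return resp.StatusCode == http.StatusOK, nil
    }

    func main() {
        ok, err := preloadExists("https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4")
        fmt.Println(ok, err)
    }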
	I1202 21:09:06.314628  307731 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/config.json ...
	I1202 21:09:06.314753  307731 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.314852  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 21:09:06.314868  307731 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 127.02µs
	I1202 21:09:06.314884  307731 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 21:09:06.314900  307731 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.314935  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 21:09:06.314945  307731 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 46.735µs
	I1202 21:09:06.314952  307731 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 21:09:06.314968  307731 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315000  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 21:09:06.315009  307731 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 42.764µs
	I1202 21:09:06.315016  307731 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 21:09:06.315030  307731 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315059  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 21:09:06.315069  307731 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 39.875µs
	I1202 21:09:06.315075  307731 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 21:09:06.315089  307731 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315119  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 21:09:06.315127  307731 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 38.629µs
	I1202 21:09:06.315144  307731 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 21:09:06.315143  307731 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:09:06.315177  307731 start.go:360] acquireMachinesLock for functional-753958: {Name:mk3203202a2efc5b27c2a0a16d932dc1b1f07522 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315202  307731 start.go:364] duration metric: took 13.3µs to acquireMachinesLock for "functional-753958"
	I1202 21:09:06.315219  307731 start.go:96] Skipping create...Using existing machine configuration
	I1202 21:09:06.315230  307731 fix.go:54] fixHost starting: 
	I1202 21:09:06.315183  307731 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315307  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 21:09:06.315332  307731 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 153.571µs
	I1202 21:09:06.315357  307731 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 21:09:06.315387  307731 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315443  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 21:09:06.315465  307731 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 80.424µs
	I1202 21:09:06.315488  307731 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:09:06.315527  307731 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315588  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 21:09:06.315619  307731 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 95.488µs
	I1202 21:09:06.315640  307731 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 21:09:06.315489  307731 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 21:09:06.315801  307731 cache.go:87] Successfully saved all images to host disk.
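Each cache.go triplet above follows one pattern: acquire a named lock, stat the cached tarball, and record a microsecond-scale duration because the file already exists. A rough Go sketch under those assumptions (the locks map and saveToTar are illustrative, not minikube's API):

    package main

    import (
        "fmt"
        "os"
        "sync"
        "time"
    )

    var locks sync.Map // image -> *sync.Mutex; stand-in for minikube's named locks

    // saveToTar skips the save when the cached tarball already exists,
    // mirroring the "exists ... took NNNµs ... succeeded" lines above.
    func saveToTar(image, path string) error {
        mu, _ := locks.LoadOrStore(image, &sync.Mutex{})
        mu.(*sync.Mutex).Lock()
        defer mu.(*sync.Mutex).Unlock()

        start := time.Now()
        if _, err := os.Stat(path); err == nil {
            fmt.Printf("cache image %q -> %q took %s\n", image, path, time.Since(start))
            return nil
        }
        // otherwise: pull the image and write the tarball (omitted in this sketch)
        return fmt.Errorf("save not implemented in this sketch")
    }

    func main() {
        _ = saveToTar("registry.k8s.io/pause:3.10.1", os.TempDir()+"/pause_3.10.1")
    }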
	I1202 21:09:06.333736  307731 fix.go:112] recreateIfNeeded on functional-753958: state=Running err=<nil>
	W1202 21:09:06.333771  307731 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 21:09:06.337175  307731 out.go:252] * Updating the running docker "functional-753958" container ...
	I1202 21:09:06.337206  307731 machine.go:94] provisionDockerMachine start ...
	I1202 21:09:06.337301  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:06.354474  307731 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:06.354810  307731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:09:06.354830  307731 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:09:06.501197  307731 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-753958
	
	I1202 21:09:06.501220  307731 ubuntu.go:182] provisioning hostname "functional-753958"
	I1202 21:09:06.501288  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:06.519375  307731 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:06.519710  307731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:09:06.519727  307731 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-753958 && echo "functional-753958" | sudo tee /etc/hostname
	I1202 21:09:06.687724  307731 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-753958
	
	I1202 21:09:06.687814  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:06.707419  307731 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:06.707758  307731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:09:06.707780  307731 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753958/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:09:06.858340  307731 main.go:143] libmachine: SSH cmd err, output: <nil>: 
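The SSH script above is an idempotent /etc/hosts update: it does nothing when a line already ends in the hostname, rewrites an existing 127.0.1.1 entry when present, and appends otherwise. The same logic in Go over the file contents as a string (illustrative only; provisioning actually runs the shell shown above):

    package main

    import (
        "fmt"
        "regexp"
    )

    // ensureHostname mirrors the grep/sed/tee logic above: no-op when the
    // hostname is present, rewrite 127.0.1.1 if it exists, else append.
    func ensureHostname(hosts, name string) string {
        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
            return hosts // hostname already present
        }
        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if re.MatchString(hosts) {
            return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        return hosts + "127.0.1.1 " + name + "\n"
    }

    func main() {
        fmt.Print(ensureHostname("127.0.0.1 localhost\n", "functional-753958"))
    }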
	I1202 21:09:06.858365  307731 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 21:09:06.858387  307731 ubuntu.go:190] setting up certificates
	I1202 21:09:06.858407  307731 provision.go:84] configureAuth start
	I1202 21:09:06.858472  307731 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
	I1202 21:09:06.877925  307731 provision.go:143] copyHostCerts
	I1202 21:09:06.877980  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 21:09:06.878020  307731 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 21:09:06.878036  307731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 21:09:06.878121  307731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 21:09:06.878219  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 21:09:06.878244  307731 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 21:09:06.878253  307731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 21:09:06.878283  307731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 21:09:06.878341  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 21:09:06.878361  307731 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 21:09:06.878366  307731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 21:09:06.878392  307731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 21:09:06.878454  307731 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.functional-753958 san=[127.0.0.1 192.168.49.2 functional-753958 localhost minikube]
	I1202 21:09:07.212788  307731 provision.go:177] copyRemoteCerts
	I1202 21:09:07.212871  307731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:09:07.212914  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.229990  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.334622  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 21:09:07.334690  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 21:09:07.358156  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 21:09:07.358212  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:09:07.374829  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 21:09:07.374936  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 21:09:07.391856  307731 provision.go:87] duration metric: took 533.420534ms to configureAuth
	I1202 21:09:07.391883  307731 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:09:07.392075  307731 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:09:07.392088  307731 machine.go:97] duration metric: took 1.054874904s to provisionDockerMachine
	I1202 21:09:07.392096  307731 start.go:293] postStartSetup for "functional-753958" (driver="docker")
	I1202 21:09:07.392108  307731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:09:07.392158  307731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:09:07.392201  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.409892  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.513929  307731 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:09:07.517313  307731 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1202 21:09:07.517377  307731 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1202 21:09:07.517399  307731 command_runner.go:130] > VERSION_ID="12"
	I1202 21:09:07.517411  307731 command_runner.go:130] > VERSION="12 (bookworm)"
	I1202 21:09:07.517423  307731 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1202 21:09:07.517428  307731 command_runner.go:130] > ID=debian
	I1202 21:09:07.517432  307731 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1202 21:09:07.517437  307731 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1202 21:09:07.517460  307731 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1202 21:09:07.517505  307731 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:09:07.517555  307731 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:09:07.517574  307731 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 21:09:07.517638  307731 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 21:09:07.517741  307731 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 21:09:07.517755  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> /etc/ssl/certs/2632412.pem
	I1202 21:09:07.517830  307731 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts -> hosts in /etc/test/nested/copy/263241
	I1202 21:09:07.517839  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts -> /etc/test/nested/copy/263241/hosts
	I1202 21:09:07.517882  307731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/263241
	I1202 21:09:07.525639  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 21:09:07.543648  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts --> /etc/test/nested/copy/263241/hosts (40 bytes)
	I1202 21:09:07.560944  307731 start.go:296] duration metric: took 168.831988ms for postStartSetup
	I1202 21:09:07.561067  307731 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:09:07.561116  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.579622  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.682695  307731 command_runner.go:130] > 12%
	I1202 21:09:07.682778  307731 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:09:07.687210  307731 command_runner.go:130] > 172G
	I1202 21:09:07.687707  307731 fix.go:56] duration metric: took 1.372471826s for fixHost
	I1202 21:09:07.687729  307731 start.go:83] releasing machines lock for "functional-753958", held for 1.372515567s
	I1202 21:09:07.687799  307731 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
	I1202 21:09:07.704780  307731 ssh_runner.go:195] Run: cat /version.json
	I1202 21:09:07.704833  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.704860  307731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:09:07.704931  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.726613  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.737148  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.829144  307731 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1202 21:09:07.829307  307731 ssh_runner.go:195] Run: systemctl --version
	I1202 21:09:07.919742  307731 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1202 21:09:07.919788  307731 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1202 21:09:07.919811  307731 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1202 21:09:07.919883  307731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 21:09:07.924332  307731 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1202 21:09:07.924495  307731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:09:07.924590  307731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:09:07.932451  307731 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 21:09:07.932475  307731 start.go:496] detecting cgroup driver to use...
	I1202 21:09:07.932505  307731 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 21:09:07.932553  307731 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 21:09:07.947902  307731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 21:09:07.964330  307731 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:09:07.964400  307731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:09:07.980760  307731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:09:07.995134  307731 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:09:08.122567  307731 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:09:08.232585  307731 docker.go:234] disabling docker service ...
	I1202 21:09:08.232660  307731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:09:08.247806  307731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:09:08.260075  307731 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:09:08.380227  307731 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:09:08.498586  307731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 21:09:08.511975  307731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:09:08.525630  307731 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1202 21:09:08.525792  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 21:09:08.534331  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 21:09:08.543412  307731 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 21:09:08.543534  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 21:09:08.552561  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 21:09:08.561268  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 21:09:08.570127  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 21:09:08.578716  307731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:09:08.586804  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 21:09:08.595543  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 21:09:08.604412  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1202 21:09:08.613462  307731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:09:08.620008  307731 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1202 21:09:08.621008  307731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 21:09:08.628262  307731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:09:08.744391  307731 ssh_runner.go:195] Run: sudo systemctl restart containerd
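The preceding run of sed commands rewrites /etc/containerd/config.toml in place: pin the sandbox image to pause:3.10.1, force SystemdCgroup = false to match the detected cgroupfs driver, map legacy runtime names to io.containerd.runc.v2, and reset conf_dir to /etc/cni/net.d, after which containerd is restarted. The cgroup rewrite expressed in Go, with an assumed config excerpt:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // assumed excerpt of /etc/containerd/config.toml before the edit
        conf := `[plugins."io.containerd.cri.v1.runtime".containerd.runtimes.runc.options]
      SystemdCgroup = true`
        // same substitution as the logged: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }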
	I1202 21:09:08.864675  307731 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 21:09:08.864794  307731 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 21:09:08.868351  307731 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1202 21:09:08.868411  307731 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1202 21:09:08.868454  307731 command_runner.go:130] > Device: 0,72	Inode: 1612        Links: 1
	I1202 21:09:08.868480  307731 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 21:09:08.868521  307731 command_runner.go:130] > Access: 2025-12-02 21:09:08.840863455 +0000
	I1202 21:09:08.868544  307731 command_runner.go:130] > Modify: 2025-12-02 21:09:08.840863455 +0000
	I1202 21:09:08.868569  307731 command_runner.go:130] > Change: 2025-12-02 21:09:08.840863455 +0000
	I1202 21:09:08.868599  307731 command_runner.go:130] >  Birth: -
	I1202 21:09:08.868892  307731 start.go:564] Will wait 60s for crictl version
	I1202 21:09:08.868989  307731 ssh_runner.go:195] Run: which crictl
	I1202 21:09:08.872054  307731 command_runner.go:130] > /usr/local/bin/crictl
	I1202 21:09:08.872553  307731 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:09:08.897996  307731 command_runner.go:130] > Version:  0.1.0
	I1202 21:09:08.898089  307731 command_runner.go:130] > RuntimeName:  containerd
	I1202 21:09:08.898120  307731 command_runner.go:130] > RuntimeVersion:  v2.1.5
	I1202 21:09:08.898152  307731 command_runner.go:130] > RuntimeApiVersion:  v1
	I1202 21:09:08.900685  307731 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 21:09:08.900802  307731 ssh_runner.go:195] Run: containerd --version
	I1202 21:09:08.918917  307731 command_runner.go:130] > containerd containerd.io v2.1.5 fcd43222d6b07379a4be9786bda52438f0dd16a1
	I1202 21:09:08.920319  307731 ssh_runner.go:195] Run: containerd --version
	I1202 21:09:08.938561  307731 command_runner.go:130] > containerd containerd.io v2.1.5 fcd43222d6b07379a4be9786bda52438f0dd16a1
	I1202 21:09:08.945896  307731 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 21:09:08.948895  307731 cli_runner.go:164] Run: docker network inspect functional-753958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:09:08.964797  307731 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:09:08.968415  307731 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1202 21:09:08.968697  307731 kubeadm.go:884] updating cluster {Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 21:09:08.968812  307731 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 21:09:08.968871  307731 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:09:08.989960  307731 command_runner.go:130] > {
	I1202 21:09:08.989978  307731 command_runner.go:130] >   "images":  [
	I1202 21:09:08.989982  307731 command_runner.go:130] >     {
	I1202 21:09:08.989991  307731 command_runner.go:130] >       "id":  "sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1202 21:09:08.989996  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990002  307731 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1202 21:09:08.990005  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990009  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990013  307731 command_runner.go:130] >       "size":  "8032639",
	I1202 21:09:08.990018  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990022  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990025  307731 command_runner.go:130] >     },
	I1202 21:09:08.990027  307731 command_runner.go:130] >     {
	I1202 21:09:08.990039  307731 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1202 21:09:08.990044  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990049  307731 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1202 21:09:08.990052  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990057  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990066  307731 command_runner.go:130] >       "size":  "21166088",
	I1202 21:09:08.990071  307731 command_runner.go:130] >       "username":  "nonroot",
	I1202 21:09:08.990075  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990078  307731 command_runner.go:130] >     },
	I1202 21:09:08.990085  307731 command_runner.go:130] >     {
	I1202 21:09:08.990092  307731 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1202 21:09:08.990096  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990101  307731 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1202 21:09:08.990104  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990108  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990112  307731 command_runner.go:130] >       "size":  "21134420",
	I1202 21:09:08.990116  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990120  307731 command_runner.go:130] >         "value":  "0"
	I1202 21:09:08.990123  307731 command_runner.go:130] >       },
	I1202 21:09:08.990126  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990130  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990133  307731 command_runner.go:130] >     },
	I1202 21:09:08.990136  307731 command_runner.go:130] >     {
	I1202 21:09:08.990143  307731 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1202 21:09:08.990147  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990156  307731 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1202 21:09:08.990159  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990163  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990167  307731 command_runner.go:130] >       "size":  "24676285",
	I1202 21:09:08.990170  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990175  307731 command_runner.go:130] >         "value":  "0"
	I1202 21:09:08.990178  307731 command_runner.go:130] >       },
	I1202 21:09:08.990182  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990189  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990192  307731 command_runner.go:130] >     },
	I1202 21:09:08.990195  307731 command_runner.go:130] >     {
	I1202 21:09:08.990202  307731 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1202 21:09:08.990206  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990213  307731 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1202 21:09:08.990216  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990220  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990224  307731 command_runner.go:130] >       "size":  "20658969",
	I1202 21:09:08.990227  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990231  307731 command_runner.go:130] >         "value":  "0"
	I1202 21:09:08.990233  307731 command_runner.go:130] >       },
	I1202 21:09:08.990237  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990241  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990244  307731 command_runner.go:130] >     },
	I1202 21:09:08.990246  307731 command_runner.go:130] >     {
	I1202 21:09:08.990253  307731 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1202 21:09:08.990257  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990262  307731 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1202 21:09:08.990265  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990269  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990273  307731 command_runner.go:130] >       "size":  "22428165",
	I1202 21:09:08.990277  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990280  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990283  307731 command_runner.go:130] >     },
	I1202 21:09:08.990287  307731 command_runner.go:130] >     {
	I1202 21:09:08.990293  307731 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1202 21:09:08.990297  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990302  307731 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1202 21:09:08.990305  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990314  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990318  307731 command_runner.go:130] >       "size":  "15389290",
	I1202 21:09:08.990322  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990329  307731 command_runner.go:130] >         "value":  "0"
	I1202 21:09:08.990332  307731 command_runner.go:130] >       },
	I1202 21:09:08.990336  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990339  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990342  307731 command_runner.go:130] >     },
	I1202 21:09:08.990345  307731 command_runner.go:130] >     {
	I1202 21:09:08.990352  307731 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1202 21:09:08.990356  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990361  307731 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1202 21:09:08.990364  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990371  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990375  307731 command_runner.go:130] >       "size":  "265458",
	I1202 21:09:08.990379  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990383  307731 command_runner.go:130] >         "value":  "65535"
	I1202 21:09:08.990386  307731 command_runner.go:130] >       },
	I1202 21:09:08.990389  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990393  307731 command_runner.go:130] >       "pinned":  true
	I1202 21:09:08.990396  307731 command_runner.go:130] >     }
	I1202 21:09:08.990402  307731 command_runner.go:130] >   ]
	I1202 21:09:08.990404  307731 command_runner.go:130] > }
	I1202 21:09:08.992021  307731 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 21:09:08.992044  307731 cache_images.go:86] Images are preloaded, skipping loading
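The JSON dump is the raw output of `sudo crictl images --output json`; minikube matches the repoTags against the image list it expects for Kubernetes v1.35.0-beta.0 and, since every entry is present, skips re-loading. A compact manual equivalent, assuming jq is available on the node:

    # Print just the tags the runtime already holds.
    sudo crictl images --output json | jq -r '.images[].repoTags[]'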
	I1202 21:09:08.992052  307731 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1202 21:09:08.992155  307731 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-753958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
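The unit text above is rendered into the systemd drop-in that the scp lines below write to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes) alongside /lib/systemd/system/kubelet.service (359 bytes). After the daemon-reload, the merged unit can be inspected with:

    # Show kubelet.service plus every drop-in overriding it.
    systemctl cat kubelet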
	I1202 21:09:08.992222  307731 ssh_runner.go:195] Run: sudo crictl info
	I1202 21:09:09.027109  307731 command_runner.go:130] > {
	I1202 21:09:09.027127  307731 command_runner.go:130] >   "cniconfig": {
	I1202 21:09:09.027132  307731 command_runner.go:130] >     "Networks": [
	I1202 21:09:09.027136  307731 command_runner.go:130] >       {
	I1202 21:09:09.027142  307731 command_runner.go:130] >         "Config": {
	I1202 21:09:09.027146  307731 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1202 21:09:09.027151  307731 command_runner.go:130] >           "Name": "cni-loopback",
	I1202 21:09:09.027155  307731 command_runner.go:130] >           "Plugins": [
	I1202 21:09:09.027164  307731 command_runner.go:130] >             {
	I1202 21:09:09.027168  307731 command_runner.go:130] >               "Network": {
	I1202 21:09:09.027172  307731 command_runner.go:130] >                 "ipam": {},
	I1202 21:09:09.027178  307731 command_runner.go:130] >                 "type": "loopback"
	I1202 21:09:09.027181  307731 command_runner.go:130] >               },
	I1202 21:09:09.027186  307731 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1202 21:09:09.027189  307731 command_runner.go:130] >             }
	I1202 21:09:09.027193  307731 command_runner.go:130] >           ],
	I1202 21:09:09.027203  307731 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1202 21:09:09.027207  307731 command_runner.go:130] >         },
	I1202 21:09:09.027212  307731 command_runner.go:130] >         "IFName": "lo"
	I1202 21:09:09.027215  307731 command_runner.go:130] >       }
	I1202 21:09:09.027218  307731 command_runner.go:130] >     ],
	I1202 21:09:09.027223  307731 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1202 21:09:09.027227  307731 command_runner.go:130] >     "PluginDirs": [
	I1202 21:09:09.027230  307731 command_runner.go:130] >       "/opt/cni/bin"
	I1202 21:09:09.027234  307731 command_runner.go:130] >     ],
	I1202 21:09:09.027238  307731 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1202 21:09:09.027242  307731 command_runner.go:130] >     "Prefix": "eth"
	I1202 21:09:09.027245  307731 command_runner.go:130] >   },
	I1202 21:09:09.027248  307731 command_runner.go:130] >   "config": {
	I1202 21:09:09.027252  307731 command_runner.go:130] >     "cdiSpecDirs": [
	I1202 21:09:09.027256  307731 command_runner.go:130] >       "/etc/cdi",
	I1202 21:09:09.027259  307731 command_runner.go:130] >       "/var/run/cdi"
	I1202 21:09:09.027263  307731 command_runner.go:130] >     ],
	I1202 21:09:09.027266  307731 command_runner.go:130] >     "cni": {
	I1202 21:09:09.027269  307731 command_runner.go:130] >       "binDir": "",
	I1202 21:09:09.027273  307731 command_runner.go:130] >       "binDirs": [
	I1202 21:09:09.027277  307731 command_runner.go:130] >         "/opt/cni/bin"
	I1202 21:09:09.027280  307731 command_runner.go:130] >       ],
	I1202 21:09:09.027285  307731 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1202 21:09:09.027289  307731 command_runner.go:130] >       "confTemplate": "",
	I1202 21:09:09.027292  307731 command_runner.go:130] >       "ipPref": "",
	I1202 21:09:09.027300  307731 command_runner.go:130] >       "maxConfNum": 1,
	I1202 21:09:09.027304  307731 command_runner.go:130] >       "setupSerially": false,
	I1202 21:09:09.027309  307731 command_runner.go:130] >       "useInternalLoopback": false
	I1202 21:09:09.027312  307731 command_runner.go:130] >     },
	I1202 21:09:09.027321  307731 command_runner.go:130] >     "containerd": {
	I1202 21:09:09.027325  307731 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1202 21:09:09.027330  307731 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1202 21:09:09.027335  307731 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1202 21:09:09.027339  307731 command_runner.go:130] >       "runtimes": {
	I1202 21:09:09.027342  307731 command_runner.go:130] >         "runc": {
	I1202 21:09:09.027347  307731 command_runner.go:130] >           "ContainerAnnotations": null,
	I1202 21:09:09.027351  307731 command_runner.go:130] >           "PodAnnotations": null,
	I1202 21:09:09.027357  307731 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1202 21:09:09.027361  307731 command_runner.go:130] >           "cgroupWritable": false,
	I1202 21:09:09.027365  307731 command_runner.go:130] >           "cniConfDir": "",
	I1202 21:09:09.027370  307731 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1202 21:09:09.027374  307731 command_runner.go:130] >           "io_type": "",
	I1202 21:09:09.027378  307731 command_runner.go:130] >           "options": {
	I1202 21:09:09.027382  307731 command_runner.go:130] >             "BinaryName": "",
	I1202 21:09:09.027386  307731 command_runner.go:130] >             "CriuImagePath": "",
	I1202 21:09:09.027390  307731 command_runner.go:130] >             "CriuWorkPath": "",
	I1202 21:09:09.027394  307731 command_runner.go:130] >             "IoGid": 0,
	I1202 21:09:09.027398  307731 command_runner.go:130] >             "IoUid": 0,
	I1202 21:09:09.027402  307731 command_runner.go:130] >             "NoNewKeyring": false,
	I1202 21:09:09.027407  307731 command_runner.go:130] >             "Root": "",
	I1202 21:09:09.027411  307731 command_runner.go:130] >             "ShimCgroup": "",
	I1202 21:09:09.027415  307731 command_runner.go:130] >             "SystemdCgroup": false
	I1202 21:09:09.027418  307731 command_runner.go:130] >           },
	I1202 21:09:09.027424  307731 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1202 21:09:09.027430  307731 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1202 21:09:09.027434  307731 command_runner.go:130] >           "runtimePath": "",
	I1202 21:09:09.027440  307731 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1202 21:09:09.027444  307731 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1202 21:09:09.027451  307731 command_runner.go:130] >           "snapshotter": ""
	I1202 21:09:09.027455  307731 command_runner.go:130] >         }
	I1202 21:09:09.027458  307731 command_runner.go:130] >       }
	I1202 21:09:09.027461  307731 command_runner.go:130] >     },
	I1202 21:09:09.027470  307731 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1202 21:09:09.027476  307731 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1202 21:09:09.027481  307731 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1202 21:09:09.027485  307731 command_runner.go:130] >     "disableApparmor": false,
	I1202 21:09:09.027490  307731 command_runner.go:130] >     "disableHugetlbController": true,
	I1202 21:09:09.027494  307731 command_runner.go:130] >     "disableProcMount": false,
	I1202 21:09:09.027499  307731 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1202 21:09:09.027503  307731 command_runner.go:130] >     "enableCDI": true,
	I1202 21:09:09.027507  307731 command_runner.go:130] >     "enableSelinux": false,
	I1202 21:09:09.027511  307731 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1202 21:09:09.027515  307731 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1202 21:09:09.027520  307731 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1202 21:09:09.027525  307731 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1202 21:09:09.027529  307731 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1202 21:09:09.027534  307731 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1202 21:09:09.027538  307731 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1202 21:09:09.027544  307731 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1202 21:09:09.027548  307731 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1202 21:09:09.027554  307731 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1202 21:09:09.027558  307731 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1202 21:09:09.027563  307731 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1202 21:09:09.027566  307731 command_runner.go:130] >   },
	I1202 21:09:09.027569  307731 command_runner.go:130] >   "features": {
	I1202 21:09:09.027574  307731 command_runner.go:130] >     "supplemental_groups_policy": true
	I1202 21:09:09.027577  307731 command_runner.go:130] >   },
	I1202 21:09:09.027581  307731 command_runner.go:130] >   "golang": "go1.24.9",
	I1202 21:09:09.027591  307731 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1202 21:09:09.027600  307731 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1202 21:09:09.027604  307731 command_runner.go:130] >   "runtimeHandlers": [
	I1202 21:09:09.027610  307731 command_runner.go:130] >     {
	I1202 21:09:09.027614  307731 command_runner.go:130] >       "features": {
	I1202 21:09:09.027619  307731 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1202 21:09:09.027623  307731 command_runner.go:130] >         "user_namespaces": true
	I1202 21:09:09.027626  307731 command_runner.go:130] >       }
	I1202 21:09:09.027629  307731 command_runner.go:130] >     },
	I1202 21:09:09.027631  307731 command_runner.go:130] >     {
	I1202 21:09:09.027635  307731 command_runner.go:130] >       "features": {
	I1202 21:09:09.027639  307731 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1202 21:09:09.027644  307731 command_runner.go:130] >         "user_namespaces": true
	I1202 21:09:09.027646  307731 command_runner.go:130] >       },
	I1202 21:09:09.027650  307731 command_runner.go:130] >       "name": "runc"
	I1202 21:09:09.027653  307731 command_runner.go:130] >     }
	I1202 21:09:09.027656  307731 command_runner.go:130] >   ],
	I1202 21:09:09.027659  307731 command_runner.go:130] >   "status": {
	I1202 21:09:09.027663  307731 command_runner.go:130] >     "conditions": [
	I1202 21:09:09.027666  307731 command_runner.go:130] >       {
	I1202 21:09:09.027670  307731 command_runner.go:130] >         "message": "",
	I1202 21:09:09.027673  307731 command_runner.go:130] >         "reason": "",
	I1202 21:09:09.027677  307731 command_runner.go:130] >         "status": true,
	I1202 21:09:09.027681  307731 command_runner.go:130] >         "type": "RuntimeReady"
	I1202 21:09:09.027685  307731 command_runner.go:130] >       },
	I1202 21:09:09.027688  307731 command_runner.go:130] >       {
	I1202 21:09:09.027694  307731 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1202 21:09:09.027699  307731 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1202 21:09:09.027703  307731 command_runner.go:130] >         "status": false,
	I1202 21:09:09.027707  307731 command_runner.go:130] >         "type": "NetworkReady"
	I1202 21:09:09.027710  307731 command_runner.go:130] >       },
	I1202 21:09:09.027713  307731 command_runner.go:130] >       {
	I1202 21:09:09.027718  307731 command_runner.go:130] >         "message": "",
	I1202 21:09:09.027722  307731 command_runner.go:130] >         "reason": "",
	I1202 21:09:09.027726  307731 command_runner.go:130] >         "status": true,
	I1202 21:09:09.027731  307731 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1202 21:09:09.027737  307731 command_runner.go:130] >       }
	I1202 21:09:09.027740  307731 command_runner.go:130] >     ]
	I1202 21:09:09.027743  307731 command_runner.go:130] >   }
	I1202 21:09:09.027746  307731 command_runner.go:130] > }
	I1202 21:09:09.029686  307731 cni.go:84] Creating CNI manager for ""
	I1202 21:09:09.029710  307731 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
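Two fields in the `crictl info` dump explain the state here: lastCNILoadStatus reports that no network config exists in /etc/cni/net.d yet, and the NetworkReady condition is false with reason NetworkPluginNotReady. That is expected, because the kindnet CNI recommended above has not been deployed; once it drops a config into /etc/cni/net.d the condition flips to true. A sketch for watching that transition:

    # Poll the runtime's network condition until the CNI config lands.
    sudo crictl info | jq '.status.conditions[] | select(.type == "NetworkReady")'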
	I1202 21:09:09.029745  307731 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:09:09.029776  307731 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753958 NodeName:functional-753958 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:09:09.029910  307731 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-753958"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
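This rendered config is what lands on the node as /var/tmp/minikube/kubeadm.yaml.new (the 2237-byte scp below). As a sketch, recent kubeadm releases can lint such a file before it is used; the subcommand may not exist on older versions:

    # Validate the Init/Cluster/Kubelet/KubeProxy configuration documents.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new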
	I1202 21:09:09.029985  307731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 21:09:09.036886  307731 command_runner.go:130] > kubeadm
	I1202 21:09:09.036909  307731 command_runner.go:130] > kubectl
	I1202 21:09:09.036915  307731 command_runner.go:130] > kubelet
	I1202 21:09:09.037789  307731 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 21:09:09.037851  307731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:09:09.045467  307731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 21:09:09.058043  307731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 21:09:09.070239  307731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
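The `scp memory -->` lines stream assets that exist only in minikube's memory over the established SSH session, so the byte counts are the rendered sizes rather than sizes of files on the host. A quick integrity check on the node, as a sketch:

    # Sizes should match the log: 328, 359 and 2237 bytes respectively.
    stat -c '%s %n' \
      /etc/systemd/system/kubelet.service.d/10-kubeadm.conf \
      /lib/systemd/system/kubelet.service \
      /var/tmp/minikube/kubeadm.yaml.new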
	I1202 21:09:09.082241  307731 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:09:09.085795  307731 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1202 21:09:09.086355  307731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:09:09.208713  307731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:09:09.542492  307731 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958 for IP: 192.168.49.2
	I1202 21:09:09.542524  307731 certs.go:195] generating shared ca certs ...
	I1202 21:09:09.542541  307731 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:09.542698  307731 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 21:09:09.542757  307731 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 21:09:09.542770  307731 certs.go:257] generating profile certs ...
	I1202 21:09:09.542908  307731 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key
	I1202 21:09:09.542989  307731 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key.c4f6fd35
	I1202 21:09:09.543042  307731 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key
	I1202 21:09:09.543063  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 21:09:09.543077  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 21:09:09.543095  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 21:09:09.543113  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 21:09:09.543136  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 21:09:09.543152  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 21:09:09.543163  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 21:09:09.543181  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 21:09:09.543248  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 21:09:09.543300  307731 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 21:09:09.543314  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:09:09.543356  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 21:09:09.543389  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:09:09.543418  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 21:09:09.543492  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 21:09:09.543552  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.543576  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.543600  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem -> /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.544214  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:09:09.562449  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:09:09.579657  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:09:09.597016  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 21:09:09.615077  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 21:09:09.633715  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 21:09:09.651379  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:09:09.669401  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 21:09:09.688777  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 21:09:09.706718  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:09:09.724108  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 21:09:09.741960  307731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
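Everything is copied to the fixed locations the kubeadm config above references (certificatesDir: /var/lib/minikube/certs, clientCAFile: /var/lib/minikube/certs/ca.crt). To confirm the reused API server cert still covers the SANs minikube configured ("127.0.0.1", "localhost", "192.168.49.2"), a sketch:

    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'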
	I1202 21:09:09.754915  307731 ssh_runner.go:195] Run: openssl version
	I1202 21:09:09.760531  307731 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1202 21:09:09.760935  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 21:09:09.769169  307731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.772688  307731 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.772981  307731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.773081  307731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.818276  307731 command_runner.go:130] > 3ec20f2e
	I1202 21:09:09.818787  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 21:09:09.826520  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:09:09.834827  307731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.838656  307731 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.838686  307731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.838739  307731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.879212  307731 command_runner.go:130] > b5213941
	I1202 21:09:09.879657  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 21:09:09.887484  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 21:09:09.895881  307731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.899623  307731 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.899669  307731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.899717  307731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.940074  307731 command_runner.go:130] > 51391683
	I1202 21:09:09.940525  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
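The hash-and-link rounds above implement OpenSSL's CApath convention: each trusted CA under /etc/ssl/certs must be reachable through a <subject-hash>.0 symlink, and `openssl x509 -hash` prints exactly that hash (3ec20f2e, b5213941 and 51391683 in this run). One round reproduced by hand, as a sketch:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"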
	I1202 21:09:09.948324  307731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:09:09.951828  307731 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:09:09.951867  307731 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1202 21:09:09.951875  307731 command_runner.go:130] > Device: 259,1	Inode: 1305405     Links: 1
	I1202 21:09:09.951881  307731 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 21:09:09.951888  307731 command_runner.go:130] > Access: 2025-12-02 21:05:02.335914079 +0000
	I1202 21:09:09.951894  307731 command_runner.go:130] > Modify: 2025-12-02 21:00:57.486756379 +0000
	I1202 21:09:09.951898  307731 command_runner.go:130] > Change: 2025-12-02 21:00:57.486756379 +0000
	I1202 21:09:09.951903  307731 command_runner.go:130] >  Birth: 2025-12-02 21:00:57.486756379 +0000
	I1202 21:09:09.951997  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 21:09:09.992474  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:09.992586  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 21:09:10.044870  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:10.045432  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 21:09:10.090412  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:10.091042  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 21:09:10.132690  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:10.133145  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 21:09:10.173976  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:10.174453  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 21:09:10.215639  307731 command_runner.go:130] > Certificate will not expire
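Each `-checkend 86400` probe asks whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 plus "Certificate will not expire" means the cert can be reused instead of regenerated. The same check for any of these files, as a sketch:

    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo reusable || echo regenerate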
	I1202 21:09:10.216098  307731 kubeadm.go:401] StartCluster: {Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:09:10.216220  307731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 21:09:10.216321  307731 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:09:10.242158  307731 cri.go:89] found id: ""
	I1202 21:09:10.242234  307731 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:09:10.249118  307731 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1202 21:09:10.249140  307731 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1202 21:09:10.249151  307731 command_runner.go:130] > /var/lib/minikube/etcd:
	I1202 21:09:10.250041  307731 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 21:09:10.250060  307731 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 21:09:10.250140  307731 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 21:09:10.257350  307731 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:09:10.257790  307731 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-753958" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:10.257903  307731 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-261381/kubeconfig needs updating (will repair): [kubeconfig missing "functional-753958" cluster setting kubeconfig missing "functional-753958" context setting]
	I1202 21:09:10.258244  307731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:10.258662  307731 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:10.258838  307731 kapi.go:59] client config for functional-753958: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key", CAFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 21:09:10.259364  307731 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 21:09:10.259381  307731 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 21:09:10.259386  307731 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 21:09:10.259392  307731 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 21:09:10.259397  307731 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 21:09:10.259441  307731 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1202 21:09:10.259684  307731 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 21:09:10.267575  307731 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 21:09:10.267606  307731 kubeadm.go:602] duration metric: took 17.540251ms to restartPrimaryControlPlane
	I1202 21:09:10.267616  307731 kubeadm.go:403] duration metric: took 51.535685ms to StartCluster
	I1202 21:09:10.267631  307731 settings.go:142] acquiring lock: {Name:mk484fa83ac7553aeb154b510943680cadb4046e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:10.267694  307731 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:10.268283  307731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
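The repair adds the missing "functional-753958" cluster and context entries to the shared kubeconfig. A sketch of the equivalent kubectl calls, with the cert fields from the client config above omitted and the user name assumed to match the profile:

    KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
    kubectl config set-cluster functional-753958 \
      --server=https://192.168.49.2:8441 --kubeconfig="$KUBECONFIG"
    kubectl config set-context functional-753958 \
      --cluster=functional-753958 --user=functional-753958 --kubeconfig="$KUBECONFIG"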
	I1202 21:09:10.268485  307731 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 21:09:10.268816  307731 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:09:10.268866  307731 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 21:09:10.268984  307731 addons.go:70] Setting storage-provisioner=true in profile "functional-753958"
	I1202 21:09:10.269003  307731 addons.go:239] Setting addon storage-provisioner=true in "functional-753958"
	I1202 21:09:10.269024  307731 host.go:66] Checking if "functional-753958" exists ...
	I1202 21:09:10.269023  307731 addons.go:70] Setting default-storageclass=true in profile "functional-753958"
	I1202 21:09:10.269176  307731 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-753958"
	I1202 21:09:10.269690  307731 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:09:10.269905  307731 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:09:10.274878  307731 out.go:179] * Verifying Kubernetes components...
	I1202 21:09:10.279673  307731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:09:10.309974  307731 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:10.310183  307731 kapi.go:59] client config for functional-753958: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key", CAFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 21:09:10.310507  307731 addons.go:239] Setting addon default-storageclass=true in "functional-753958"
	I1202 21:09:10.310544  307731 host.go:66] Checking if "functional-753958" exists ...
	I1202 21:09:10.311034  307731 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:09:10.322713  307731 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:09:10.325707  307731 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:10.325729  307731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 21:09:10.325795  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:10.357829  307731 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:10.357850  307731 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 21:09:10.357914  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:10.371695  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:10.400329  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:10.499296  307731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:09:10.516631  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:10.547824  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:11.294654  307731 node_ready.go:35] waiting up to 6m0s for node "functional-753958" to be "Ready" ...
	I1202 21:09:11.294774  307731 type.go:168] "Request Body" body=""
	I1202 21:09:11.294779  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.294839  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:11.295227  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:11.295315  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.295463  307731 retry.go:31] will retry after 210.924688ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:11.295364  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.295467  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:11.295550  307731 retry.go:31] will retry after 203.437895ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
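Both addon applies fail because nothing answers on the apiserver port, and the retry.go entries reschedule them with growing, jittered delays (roughly 200ms here, reaching 13.2s later in this log), i.e. exponential backoff. A sketch of that pattern under the same command shape; this is a generic illustration, not minikube's actual retry implementation, and it runs kubectl locally rather than over SSH with sudo and KUBECONFIG set as the log does:

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithBackoff re-runs "kubectl apply --force -f manifest" until it
    // succeeds or the deadline passes, roughly doubling a jittered wait
    // between attempts -- the shape of the retry.go delays in the log.
    func applyWithBackoff(manifest string, deadline time.Duration) error {
        wait := 200 * time.Millisecond
        start := time.Now()
        for {
            out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            if time.Since(start) > deadline {
                return fmt.Errorf("giving up on %s: %v\n%s", manifest, err, out)
            }
            // Sleep wait +/- 25%, then double the base for the next round.
            jitter := time.Duration(rand.Int63n(int64(wait)/2)) - wait/4
            time.Sleep(wait + jitter)
            wait *= 2
        }
    }

    func main() {
        err := applyWithBackoff("/etc/kubernetes/addons/storage-provisioner.yaml", 2*time.Minute)
        fmt.Println(err)
    }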
	I1202 21:09:11.500110  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:11.506791  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:11.578640  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:11.581915  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.581967  307731 retry.go:31] will retry after 400.592485ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.595609  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:11.595676  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.595708  307731 retry.go:31] will retry after 422.737023ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
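Note that the manifests themselves are fine: kubectl's client-side validation first downloads the apiserver's OpenAPI document, so while port 8441 refuses connections every apply fails this way regardless of content (hence the --validate=false hint in the error text). A quick reachability probe one could run against the same endpoint; this sketch assumes the apiserver's standard /livez liveness path and skips certificate verification since only reachability matters here:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The probe only asks "is anything listening?", so skip TLS verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://localhost:8441/livez")
        if err != nil {
            fmt.Println("apiserver unreachable:", err) // "connection refused" throughout this run
            return
        }
        defer resp.Body.Close()
        fmt.Println("apiserver:", resp.Status)
    }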
	I1202 21:09:11.794907  307731 type.go:168] "Request Body" body=""
	I1202 21:09:11.795054  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:11.795388  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:11.982828  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:12.018958  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:12.086246  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:12.086287  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.086307  307731 retry.go:31] will retry after 564.880189ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.117100  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:12.117143  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.117191  307731 retry.go:31] will retry after 637.534191ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.295409  307731 type.go:168] "Request Body" body=""
	I1202 21:09:12.295483  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:12.295805  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:12.652365  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:12.710471  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:12.710580  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.710622  307731 retry.go:31] will retry after 876.325619ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.755731  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:12.795162  307731 type.go:168] "Request Body" body=""
	I1202 21:09:12.795277  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:12.795599  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:12.835060  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:12.835099  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.835118  307731 retry.go:31] will retry after 1.227832404s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:13.295855  307731 type.go:168] "Request Body" body=""
	I1202 21:09:13.295948  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:13.296269  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:13.296338  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
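The polling loop above (node_ready.go via round_trippers) issues a GET of /api/v1/nodes/functional-753958 about twice a second, waiting for the node's Ready condition; each request dies before a status line is returned, which is why every "Response" entry is empty. The equivalent check with client-go might look like the sketch below (the kubeconfig path and node name are taken from the log; the 500ms cadence is inferred from the timestamps):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err // here: "dial tcp 192.168.49.2:8441: connect: connection refused"
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            ready, err := nodeReady(context.TODO(), cs, "functional-753958")
            if err == nil && ready {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }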
	I1202 21:09:13.587806  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:13.646676  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:13.646721  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:13.646742  307731 retry.go:31] will retry after 1.443838067s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:13.795158  307731 type.go:168] "Request Body" body=""
	I1202 21:09:13.795236  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:13.795586  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:14.064081  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:14.123819  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:14.127173  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:14.127215  307731 retry.go:31] will retry after 1.221247817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:14.295601  307731 type.go:168] "Request Body" body=""
	I1202 21:09:14.295675  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:14.295968  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:14.795792  307731 type.go:168] "Request Body" body=""
	I1202 21:09:14.795874  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:14.796179  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:15.091734  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:15.151479  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:15.151525  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:15.151546  307731 retry.go:31] will retry after 1.850953854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:15.294847  307731 type.go:168] "Request Body" body=""
	I1202 21:09:15.294941  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:15.295253  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:15.349587  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:15.413525  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:15.416721  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:15.416752  307731 retry.go:31] will retry after 1.691274377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:15.795194  307731 type.go:168] "Request Body" body=""
	I1202 21:09:15.795307  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:15.795621  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:15.795696  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:16.295456  307731 type.go:168] "Request Body" body=""
	I1202 21:09:16.295552  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:16.295874  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:16.795680  307731 type.go:168] "Request Body" body=""
	I1202 21:09:16.795755  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:16.796091  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:17.003193  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:17.061077  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:17.064289  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:17.064321  307731 retry.go:31] will retry after 2.076549374s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:17.108496  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:17.168660  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:17.168709  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:17.168731  307731 retry.go:31] will retry after 3.158627903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:17.295738  307731 type.go:168] "Request Body" body=""
	I1202 21:09:17.295812  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:17.296081  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:17.794893  307731 type.go:168] "Request Body" body=""
	I1202 21:09:17.794974  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:17.795334  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:18.294955  307731 type.go:168] "Request Body" body=""
	I1202 21:09:18.295057  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:18.295390  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:18.295447  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:18.795090  307731 type.go:168] "Request Body" body=""
	I1202 21:09:18.795156  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:18.795510  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:19.141123  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:19.199068  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:19.202437  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:19.202469  307731 retry.go:31] will retry after 2.729492901s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:19.295833  307731 type.go:168] "Request Body" body=""
	I1202 21:09:19.295905  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:19.296241  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:19.794962  307731 type.go:168] "Request Body" body=""
	I1202 21:09:19.795035  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:19.795344  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:20.295255  307731 type.go:168] "Request Body" body=""
	I1202 21:09:20.295325  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:20.295687  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:20.295737  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:20.327882  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:20.391902  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:20.391939  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:20.391960  307731 retry.go:31] will retry after 4.367650264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:20.795532  307731 type.go:168] "Request Body" body=""
	I1202 21:09:20.795609  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:20.795920  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:21.295837  307731 type.go:168] "Request Body" body=""
	I1202 21:09:21.295923  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:21.296260  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:21.794943  307731 type.go:168] "Request Body" body=""
	I1202 21:09:21.795018  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:21.795282  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:21.932718  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:21.990698  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:21.990736  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:21.990761  307731 retry.go:31] will retry after 5.196584204s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:22.295359  307731 type.go:168] "Request Body" body=""
	I1202 21:09:22.295443  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:22.295788  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:22.295845  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:22.795464  307731 type.go:168] "Request Body" body=""
	I1202 21:09:22.795562  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:22.795917  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:23.295669  307731 type.go:168] "Request Body" body=""
	I1202 21:09:23.295739  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:23.296001  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:23.795753  307731 type.go:168] "Request Body" body=""
	I1202 21:09:23.795825  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:23.796151  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:24.295815  307731 type.go:168] "Request Body" body=""
	I1202 21:09:24.295890  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:24.296207  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:24.296265  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:24.759924  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:24.795570  307731 type.go:168] "Request Body" body=""
	I1202 21:09:24.795642  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:24.795905  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:24.817214  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:24.821374  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:24.821411  307731 retry.go:31] will retry after 3.851570628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:25.294967  307731 type.go:168] "Request Body" body=""
	I1202 21:09:25.295041  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:25.295322  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:25.794947  307731 type.go:168] "Request Body" body=""
	I1202 21:09:25.795017  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:25.795343  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:26.295350  307731 type.go:168] "Request Body" body=""
	I1202 21:09:26.295431  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:26.295727  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:26.795297  307731 type.go:168] "Request Body" body=""
	I1202 21:09:26.795366  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:26.795685  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:26.795740  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:27.188447  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:27.254238  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:27.254282  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:27.254304  307731 retry.go:31] will retry after 6.785596085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:27.295437  307731 type.go:168] "Request Body" body=""
	I1202 21:09:27.295523  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:27.295865  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:27.794985  307731 type.go:168] "Request Body" body=""
	I1202 21:09:27.795057  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:27.795311  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:28.294999  307731 type.go:168] "Request Body" body=""
	I1202 21:09:28.295102  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:28.295384  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:28.674112  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:28.734788  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:28.734834  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:28.734853  307731 retry.go:31] will retry after 5.470614597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:28.794971  307731 type.go:168] "Request Body" body=""
	I1202 21:09:28.795042  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:28.795339  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:29.295607  307731 type.go:168] "Request Body" body=""
	I1202 21:09:29.295683  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:29.296024  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:29.296105  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:29.794837  307731 type.go:168] "Request Body" body=""
	I1202 21:09:29.794912  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:29.795239  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:30.295136  307731 type.go:168] "Request Body" body=""
	I1202 21:09:30.295232  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:30.295517  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:30.794890  307731 type.go:168] "Request Body" body=""
	I1202 21:09:30.794959  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:30.795267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:31.294932  307731 type.go:168] "Request Body" body=""
	I1202 21:09:31.295003  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:31.295317  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:31.794931  307731 type.go:168] "Request Body" body=""
	I1202 21:09:31.795007  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:31.795289  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:31.795338  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:32.295580  307731 type.go:168] "Request Body" body=""
	I1202 21:09:32.295653  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:32.295944  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:32.795804  307731 type.go:168] "Request Body" body=""
	I1202 21:09:32.795885  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:32.796241  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:33.294972  307731 type.go:168] "Request Body" body=""
	I1202 21:09:33.295049  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:33.295374  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:33.794828  307731 type.go:168] "Request Body" body=""
	I1202 21:09:33.794899  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:33.795152  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:34.040709  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:34.103827  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:34.103870  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:34.103890  307731 retry.go:31] will retry after 13.233422448s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
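	The `retry.go:31] will retry after 13.233422448s` line above comes from a retry-with-randomized-backoff wrapper around the addon apply; the non-round delays that follow in this log (9.1s, 18.2s, 28.8s, ...) are consistent with jittered, growing waits. A hedged sketch of that pattern, assuming nothing beyond the command shape shown in the log (applyAddon, the deadline, and the growth factor are illustrative, not minikube's code):

```go
// Sketch: re-run a failing apply with randomized, growing delays until it
// succeeds or a deadline passes. Mirrors the logged behavior only in shape.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func applyAddon(manifest string) error {
	// Same command shape as the log: kubectl apply --force -f <manifest>.
	out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%w\nstdout/stderr:\n%s", err, out)
	}
	return nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	delay := 5 * time.Second
	for {
		err := applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml")
		if err == nil {
			return
		}
		if time.Now().After(deadline) {
			panic(err)
		}
		// Randomize so concurrent appliers don't retry in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base delay each round
	}
}
```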
	I1202 21:09:34.206146  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:34.265937  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:34.265992  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:34.266011  307731 retry.go:31] will retry after 9.178751123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 21:09:34.3–21:09:39.3: 11 identical node Ready poll cycles, all connection refused, with node_ready.go:55 "will retry" warnings at 21:09:34.3 and 21:09:36.8; duplicate blocks omitted ...]
	W1202 21:09:39.295528  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
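	Each request in these cycles advertises `Accept: application/vnd.kubernetes.protobuf,application/json`, i.e. protobuf preferred with a JSON fallback. With client-go, that content negotiation (and a custom User-Agent like the one seen here) is set on the rest.Config before building the clientset; a minimal sketch, assuming a default kubeconfig:

```go
// Sketch: negotiate protobuf with a JSON fallback, matching the Accept
// header in the requests above. Assumes a standard kubeconfig location.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	cfg.ContentType = "application/vnd.kubernetes.protobuf"
	cfg.UserAgent = "minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format"
	_ = kubernetes.NewForConfigOrDie(cfg) // clientset now prefers protobuf
}
```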
	[... 21:09:39.8–21:09:43.3: 8 identical node Ready poll cycles, all connection refused, with a "will retry" warning at 21:09:41.8; duplicate blocks omitted ...]
	I1202 21:09:43.445783  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:43.508150  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:43.508187  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:43.508208  307731 retry.go:31] will retry after 18.255533178s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 21:09:43.8–21:09:47.3: 9 identical node Ready poll cycles, all connection refused, with "will retry" warnings at 21:09:43.8 and 21:09:46.3; duplicate blocks omitted ...]
	I1202 21:09:47.337905  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:47.398412  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:47.398459  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:47.398478  307731 retry.go:31] will retry after 28.802230035s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 21:09:47.8–21:10:01.3: node Ready polling continued every ~0.5s (28 identical cycles, all connection refused), with "will retry" warnings at 21:09:48.8, 21:09:50.8, 21:09:53.3, 21:09:55.3, 21:09:57.8 and 21:10:00.3; duplicate blocks omitted ...]
	I1202 21:10:01.763971  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:10:01.794859  307731 type.go:168] "Request Body" body=""
	I1202 21:10:01.794929  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:01.795196  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:01.835916  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:01.839908  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:10:01.839940  307731 retry.go:31] will retry after 30.677466671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... 21:10:02.3–21:10:15.8: node Ready polling continued every ~0.5s (28 identical cycles, all connection refused), with "will retry" warnings at 21:10:02.3, 21:10:04.8, 21:10:07.3, 21:10:09.3, 21:10:11.3, 21:10:13.8 and 21:10:15.8; duplicate blocks omitted ...]
	I1202 21:10:16.200937  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:10:16.256562  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:16.259927  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:10:16.259959  307731 retry.go:31] will retry after 18.923209073s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
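	Note that the `--validate=false` hint in this error is a red herring here: kubectl is failing while downloading the OpenAPI schema for client-side validation, but skipping validation would not help, because the apiserver on port 8441 is refusing connections entirely. A cheaper way to gate these retries is to probe the apiserver's /readyz endpoint first; a sketch (the client setup and endpoint choice are assumptions, not what minikube actually does):

```go
// Sketch: probe the apiserver's /readyz before retrying an addon apply, so
// the loop can distinguish "apiserver still down" from a real manifest error.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	body, err := client.Discovery().RESTClient().
		Get().AbsPath("/readyz").DoRaw(context.Background())
	if err != nil {
		fmt.Println("apiserver not ready yet:", err) // e.g. connection refused
		return
	}
	fmt.Println("apiserver ready:", string(body))
}
```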
	[... 21:10:16.3–21:10:23.8: 16 identical node Ready poll cycles, all connection refused, with "will retry" warnings at 21:10:18.3, 21:10:20.3 and 21:10:22.3; duplicate blocks omitted ...]
	I1202 21:10:24.294826  307731 type.go:168] "Request Body" body=""
	I1202 21:10:24.294902  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:24.295188  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:24.795853  307731 type.go:168] "Request Body" body=""
	I1202 21:10:24.795928  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:24.796234  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:24.796284  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:25.294847  307731 type.go:168] "Request Body" body=""
	I1202 21:10:25.294948  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:25.295306  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:25.794855  307731 type.go:168] "Request Body" body=""
	I1202 21:10:25.794922  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:25.795171  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:26.294939  307731 type.go:168] "Request Body" body=""
	I1202 21:10:26.295012  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:26.295321  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:26.795034  307731 type.go:168] "Request Body" body=""
	I1202 21:10:26.795116  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:26.795438  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:27.294916  307731 type.go:168] "Request Body" body=""
	I1202 21:10:27.294995  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:27.295345  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:27.295395  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:27.794938  307731 type.go:168] "Request Body" body=""
	I1202 21:10:27.795010  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:27.795348  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:28.294934  307731 type.go:168] "Request Body" body=""
	I1202 21:10:28.295009  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:28.295346  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:28.794910  307731 type.go:168] "Request Body" body=""
	I1202 21:10:28.794984  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:28.795299  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:29.294923  307731 type.go:168] "Request Body" body=""
	I1202 21:10:29.295009  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:29.295351  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:29.295418  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:29.795094  307731 type.go:168] "Request Body" body=""
	I1202 21:10:29.795169  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:29.795504  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:30.295472  307731 type.go:168] "Request Body" body=""
	I1202 21:10:30.295550  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:30.295841  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:30.795670  307731 type.go:168] "Request Body" body=""
	I1202 21:10:30.795750  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:30.796084  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:31.294839  307731 type.go:168] "Request Body" body=""
	I1202 21:10:31.294919  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:31.295203  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:31.794808  307731 type.go:168] "Request Body" body=""
	I1202 21:10:31.794881  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:31.795146  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:31.795189  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:32.294872  307731 type.go:168] "Request Body" body=""
	I1202 21:10:32.294951  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:32.295277  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
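	The loop abridged above is a plain readiness poll: one GET on the node object every ~500ms, tolerating "connection refused" while the apiserver is down. Below is a minimal sketch of that pattern in Go with client-go; this is an illustration of the mechanism the log shows, not minikube's actual node_ready.go, and the package, function, and variable names are invented.

	// Package readiness sketches the node-Ready polling pattern seen in the log.
	package readiness

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// WaitNodeReady GETs the named node every 500ms until its Ready condition
	// is True, logging transport errors (e.g. "connect: connection refused")
	// and retrying instead of failing, until the context is cancelled.
	func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// While the apiserver is restarting this is typically
				// "dial tcp ...:8441: connect: connection refused".
				fmt.Printf("error getting node %q condition \"Ready\" status (will retry): %v\n", name, err)
			} else {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-tick.C:
			}
		}
	}

	The empty status="" / milliseconds=0 response lines in the log are what such a loop records when the TCP connection is refused before any HTTP exchange takes place.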
	I1202 21:10:32.517612  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:10:32.588466  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:32.591823  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:32.591933  307731 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
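	Note that the validation error above is a symptom rather than the root cause: kubectl apply first tries to fetch the OpenAPI schema from the apiserver it is deploying to, and that fetch dies on the same refused connection as the node polls. Passing --validate=false, as the message suggests, would only skip the schema download; the apply would still need a reachable apiserver at localhost:8441 to persist the StorageClass.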
	... (identical polling of /api/v1/nodes/functional-753958 continued from 21:10:32.79 through 21:10:34.79, still refused; node_ready.go repeated its warning at 21:10:33.79) ...
	I1202 21:10:35.183965  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:10:35.239016  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:35.242188  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:10:35.242221  307731 retry.go:31] will retry after 25.961571555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
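	The 25.961571555s figure comes from minikube's retry helper picking a jittered delay before the next apply attempt. As a rough sketch of that pattern in Go (an assumed shape for illustration, not the actual retry.go; names are invented):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryExpBackoff retries f with an exponentially growing, jittered delay
	// until it succeeds or the overall budget maxWait is exhausted.
	func retryExpBackoff(f func() error, maxWait time.Duration) error {
		deadline := time.Now().Add(maxWait)
		wait := time.Second
		for {
			err := f()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("giving up: %w", err)
			}
			// Jitter the delay so parallel retries don't synchronize.
			sleep := wait + time.Duration(rand.Int63n(int64(wait)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			wait *= 2
		}
	}

	func main() {
		attempt := 0
		err := retryExpBackoff(func() error {
			attempt++
			if attempt < 3 {
				return fmt.Errorf("apiserver not reachable yet")
			}
			return nil
		}, time.Minute)
		fmt.Println("done:", err)
	}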
	... (identical polling continued every ~500ms from 21:10:35.29 through 21:11:00.79 while the 25.96s addon retry timer ran; node_ready.go repeated its "connection refused" warning every 2-3s) ...
	I1202 21:11:01.204061  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:11:01.267039  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:11:01.267090  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:11:01.267174  307731 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 21:11:01.270170  307731 out.go:179] * Enabled addons: 
	I1202 21:11:01.273921  307731 addons.go:530] duration metric: took 1m51.005043213s for enable addons: enabled=[]
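	In other words, after 1m51s of failed applies the addon phase gives up and records an empty enabled set (enabled=[]): both default-storageclass and storage-provisioner were requested, but neither manifest could be applied while the apiserver stayed unreachable.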
	... (identical polling continued every ~500ms from 21:11:01.29 through 21:11:13.79, every attempt refused; node_ready.go repeated its warning every 2-3s, last at 21:11:12.79) ...
	I1202 21:11:14.294851  307731 type.go:168] "Request Body" body=""
	I1202 21:11:14.294929  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:14.295263  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:14.794845  307731 type.go:168] "Request Body" body=""
	I1202 21:11:14.794920  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:14.795267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:15.294957  307731 type.go:168] "Request Body" body=""
	I1202 21:11:15.295024  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:15.295277  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:15.295317  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:15.794930  307731 type.go:168] "Request Body" body=""
	I1202 21:11:15.795005  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:15.795367  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:16.294926  307731 type.go:168] "Request Body" body=""
	I1202 21:11:16.295007  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:16.295351  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:16.794824  307731 type.go:168] "Request Body" body=""
	I1202 21:11:16.794897  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:16.795171  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:17.294881  307731 type.go:168] "Request Body" body=""
	I1202 21:11:17.294952  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:17.295258  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:17.794911  307731 type.go:168] "Request Body" body=""
	I1202 21:11:17.795029  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:17.795337  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:17.795384  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:18.294837  307731 type.go:168] "Request Body" body=""
	I1202 21:11:18.294907  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:18.295270  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:18.794916  307731 type.go:168] "Request Body" body=""
	I1202 21:11:18.794993  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:18.795332  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:19.295034  307731 type.go:168] "Request Body" body=""
	I1202 21:11:19.295134  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:19.295446  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:19.795123  307731 type.go:168] "Request Body" body=""
	I1202 21:11:19.795197  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:19.795502  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:19.795550  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:20.295498  307731 type.go:168] "Request Body" body=""
	I1202 21:11:20.295582  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:20.295890  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:20.795670  307731 type.go:168] "Request Body" body=""
	I1202 21:11:20.795745  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:20.796070  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:21.294797  307731 type.go:168] "Request Body" body=""
	I1202 21:11:21.294862  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:21.295106  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:21.795856  307731 type.go:168] "Request Body" body=""
	I1202 21:11:21.795927  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:21.796206  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:21.796258  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:22.294911  307731 type.go:168] "Request Body" body=""
	I1202 21:11:22.295002  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:22.295336  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:22.795444  307731 type.go:168] "Request Body" body=""
	I1202 21:11:22.795511  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:22.795821  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:23.295637  307731 type.go:168] "Request Body" body=""
	I1202 21:11:23.295716  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:23.296030  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:23.795816  307731 type.go:168] "Request Body" body=""
	I1202 21:11:23.795911  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:23.796220  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:24.294908  307731 type.go:168] "Request Body" body=""
	I1202 21:11:24.295038  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:24.295400  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:24.295449  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:24.794928  307731 type.go:168] "Request Body" body=""
	I1202 21:11:24.795056  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:24.795347  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:25.294949  307731 type.go:168] "Request Body" body=""
	I1202 21:11:25.295023  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:25.295327  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:25.795650  307731 type.go:168] "Request Body" body=""
	I1202 21:11:25.795726  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:25.795991  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:26.294874  307731 type.go:168] "Request Body" body=""
	I1202 21:11:26.294952  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:26.295297  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:26.794989  307731 type.go:168] "Request Body" body=""
	I1202 21:11:26.795064  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:26.795394  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:26.795449  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:27.295101  307731 type.go:168] "Request Body" body=""
	I1202 21:11:27.295170  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:27.295451  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:27.794923  307731 type.go:168] "Request Body" body=""
	I1202 21:11:27.794995  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:27.795354  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:28.294927  307731 type.go:168] "Request Body" body=""
	I1202 21:11:28.295007  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:28.295301  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:28.795573  307731 type.go:168] "Request Body" body=""
	I1202 21:11:28.795646  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:28.795898  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:28.795938  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:29.295736  307731 type.go:168] "Request Body" body=""
	I1202 21:11:29.295816  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:29.296135  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:29.794877  307731 type.go:168] "Request Body" body=""
	I1202 21:11:29.794966  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:29.795325  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:30.295097  307731 type.go:168] "Request Body" body=""
	I1202 21:11:30.295169  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:30.295440  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:30.794919  307731 type.go:168] "Request Body" body=""
	I1202 21:11:30.794997  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:30.795313  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:31.294936  307731 type.go:168] "Request Body" body=""
	I1202 21:11:31.295019  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:31.295348  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:31.295398  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:31.794864  307731 type.go:168] "Request Body" body=""
	I1202 21:11:31.794939  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:31.795188  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:32.294898  307731 type.go:168] "Request Body" body=""
	I1202 21:11:32.294975  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:32.295306  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:32.794926  307731 type.go:168] "Request Body" body=""
	I1202 21:11:32.795000  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:32.795339  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:33.295036  307731 type.go:168] "Request Body" body=""
	I1202 21:11:33.295108  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:33.295363  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:33.794937  307731 type.go:168] "Request Body" body=""
	I1202 21:11:33.795021  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:33.795373  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:33.795429  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:34.294913  307731 type.go:168] "Request Body" body=""
	I1202 21:11:34.294989  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:34.295322  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:34.795011  307731 type.go:168] "Request Body" body=""
	I1202 21:11:34.795087  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:34.795342  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:35.294937  307731 type.go:168] "Request Body" body=""
	I1202 21:11:35.295015  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:35.295337  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:35.795066  307731 type.go:168] "Request Body" body=""
	I1202 21:11:35.795146  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:35.795473  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:35.795529  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:36.295315  307731 type.go:168] "Request Body" body=""
	I1202 21:11:36.295394  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:36.295654  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:36.795469  307731 type.go:168] "Request Body" body=""
	I1202 21:11:36.795546  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:36.795881  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:37.295695  307731 type.go:168] "Request Body" body=""
	I1202 21:11:37.295777  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:37.296183  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:37.795356  307731 type.go:168] "Request Body" body=""
	I1202 21:11:37.795431  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:37.795698  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:37.795750  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:38.295449  307731 type.go:168] "Request Body" body=""
	I1202 21:11:38.295517  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:38.295837  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:38.795650  307731 type.go:168] "Request Body" body=""
	I1202 21:11:38.795731  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:38.796075  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:39.295366  307731 type.go:168] "Request Body" body=""
	I1202 21:11:39.295436  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:39.295758  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:39.795586  307731 type.go:168] "Request Body" body=""
	I1202 21:11:39.795668  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:39.795998  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:39.796055  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:40.294852  307731 type.go:168] "Request Body" body=""
	I1202 21:11:40.294933  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:40.295284  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:40.794857  307731 type.go:168] "Request Body" body=""
	I1202 21:11:40.794934  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:40.795237  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:41.295084  307731 type.go:168] "Request Body" body=""
	I1202 21:11:41.295163  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:41.295481  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:41.794927  307731 type.go:168] "Request Body" body=""
	I1202 21:11:41.795005  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:41.795339  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:42.295575  307731 type.go:168] "Request Body" body=""
	I1202 21:11:42.295656  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:42.295978  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:42.296030  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:42.795792  307731 type.go:168] "Request Body" body=""
	I1202 21:11:42.795869  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:42.796202  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:43.295844  307731 type.go:168] "Request Body" body=""
	I1202 21:11:43.295922  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:43.296257  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:43.795435  307731 type.go:168] "Request Body" body=""
	I1202 21:11:43.795509  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:43.795804  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:44.295603  307731 type.go:168] "Request Body" body=""
	I1202 21:11:44.295700  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:44.296029  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:44.296112  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:44.794813  307731 type.go:168] "Request Body" body=""
	I1202 21:11:44.794887  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:44.795255  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:45.294944  307731 type.go:168] "Request Body" body=""
	I1202 21:11:45.295025  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:45.295309  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:45.794932  307731 type.go:168] "Request Body" body=""
	I1202 21:11:45.795013  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:45.795341  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:46.295180  307731 type.go:168] "Request Body" body=""
	I1202 21:11:46.295255  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:46.295594  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:46.795733  307731 type.go:168] "Request Body" body=""
	I1202 21:11:46.795806  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:46.796075  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:46.796126  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:47.294799  307731 type.go:168] "Request Body" body=""
	I1202 21:11:47.294879  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:47.295242  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:47.794839  307731 type.go:168] "Request Body" body=""
	I1202 21:11:47.794920  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:47.795217  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:48.294853  307731 type.go:168] "Request Body" body=""
	I1202 21:11:48.294919  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:48.295217  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:48.794947  307731 type.go:168] "Request Body" body=""
	I1202 21:11:48.795020  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:48.795320  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:49.294951  307731 type.go:168] "Request Body" body=""
	I1202 21:11:49.295028  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:49.295348  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:49.295407  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:49.795675  307731 type.go:168] "Request Body" body=""
	I1202 21:11:49.795752  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:49.796093  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:50.295777  307731 type.go:168] "Request Body" body=""
	I1202 21:11:50.295858  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:50.296181  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:50.794944  307731 type.go:168] "Request Body" body=""
	I1202 21:11:50.795022  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:50.795327  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:51.294892  307731 type.go:168] "Request Body" body=""
	I1202 21:11:51.294961  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:51.295275  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:51.794949  307731 type.go:168] "Request Body" body=""
	I1202 21:11:51.795028  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:51.795369  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:51.795425  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:52.295105  307731 type.go:168] "Request Body" body=""
	I1202 21:11:52.295183  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:52.295500  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:52.795678  307731 type.go:168] "Request Body" body=""
	I1202 21:11:52.795744  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:52.796004  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:53.295812  307731 type.go:168] "Request Body" body=""
	I1202 21:11:53.295892  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:53.296208  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:53.795862  307731 type.go:168] "Request Body" body=""
	I1202 21:11:53.795942  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:53.796296  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:53.796344  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:54.294832  307731 type.go:168] "Request Body" body=""
	I1202 21:11:54.294896  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:54.295145  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:54.794887  307731 type.go:168] "Request Body" body=""
	I1202 21:11:54.794967  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:54.795291  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:55.294921  307731 type.go:168] "Request Body" body=""
	I1202 21:11:55.294995  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:55.295281  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:55.795485  307731 type.go:168] "Request Body" body=""
	I1202 21:11:55.795558  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:55.795809  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:56.295721  307731 type.go:168] "Request Body" body=""
	I1202 21:11:56.295797  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:56.296098  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:56.296148  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:56.794838  307731 type.go:168] "Request Body" body=""
	I1202 21:11:56.794917  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:56.795263  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:57.295605  307731 type.go:168] "Request Body" body=""
	I1202 21:11:57.295673  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:57.295938  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:57.795732  307731 type.go:168] "Request Body" body=""
	I1202 21:11:57.795802  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:57.796121  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:58.294836  307731 type.go:168] "Request Body" body=""
	I1202 21:11:58.294913  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:58.295263  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:58.794825  307731 type.go:168] "Request Body" body=""
	I1202 21:11:58.794896  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:58.795143  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:58.795190  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:59.294877  307731 type.go:168] "Request Body" body=""
	I1202 21:11:59.294951  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:59.295267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:59.794990  307731 type.go:168] "Request Body" body=""
	I1202 21:11:59.795067  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:59.795410  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:00.308704  307731 type.go:168] "Request Body" body=""
	I1202 21:12:00.308789  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:00.309104  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:00.794873  307731 type.go:168] "Request Body" body=""
	I1202 21:12:00.794956  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:00.795278  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:00.795332  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:01.294968  307731 type.go:168] "Request Body" body=""
	I1202 21:12:01.295063  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:01.295473  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:01.794941  307731 type.go:168] "Request Body" body=""
	I1202 21:12:01.795021  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:01.795373  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:02.294948  307731 type.go:168] "Request Body" body=""
	I1202 21:12:02.295043  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:02.295340  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:02.794904  307731 type.go:168] "Request Body" body=""
	I1202 21:12:02.795002  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:02.795388  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:02.795477  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:03.295203  307731 type.go:168] "Request Body" body=""
	I1202 21:12:03.295281  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:03.295626  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-753958 poll (process 307731) repeats every ~500ms from 21:12:03 to 21:13:04; every attempt returns "Response" status="" headers="" milliseconds=0, and node_ready.go:55 logs the same 'dial tcp 192.168.49.2:8441: connect: connection refused (will retry)' warning roughly every two seconds ...]
	I1202 21:13:05.295036  307731 type.go:168] "Request Body" body=""
	I1202 21:13:05.295111  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:05.295416  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:05.295461  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:05.795108  307731 type.go:168] "Request Body" body=""
	I1202 21:13:05.795173  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:05.795466  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:06.295448  307731 type.go:168] "Request Body" body=""
	I1202 21:13:06.295528  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:06.296185  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:06.794905  307731 type.go:168] "Request Body" body=""
	I1202 21:13:06.794985  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:06.795346  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:07.295651  307731 type.go:168] "Request Body" body=""
	I1202 21:13:07.295719  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:07.296051  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:07.296110  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:07.795853  307731 type.go:168] "Request Body" body=""
	I1202 21:13:07.795926  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:07.796263  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:08.294869  307731 type.go:168] "Request Body" body=""
	I1202 21:13:08.294949  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:08.295301  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:08.795548  307731 type.go:168] "Request Body" body=""
	I1202 21:13:08.795627  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:08.795895  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:09.295682  307731 type.go:168] "Request Body" body=""
	I1202 21:13:09.295756  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:09.296097  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:09.296151  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:09.794843  307731 type.go:168] "Request Body" body=""
	I1202 21:13:09.794918  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:09.795258  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:10.295332  307731 type.go:168] "Request Body" body=""
	I1202 21:13:10.295413  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:10.295727  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:10.795553  307731 type.go:168] "Request Body" body=""
	I1202 21:13:10.795634  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:10.796008  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:11.295865  307731 type.go:168] "Request Body" body=""
	I1202 21:13:11.295935  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:11.296253  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:11.296301  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:11.795670  307731 type.go:168] "Request Body" body=""
	I1202 21:13:11.795744  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:11.796123  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:12.294883  307731 type.go:168] "Request Body" body=""
	I1202 21:13:12.294963  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:12.295307  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:12.795041  307731 type.go:168] "Request Body" body=""
	I1202 21:13:12.795119  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:12.795456  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:13.295695  307731 type.go:168] "Request Body" body=""
	I1202 21:13:13.295760  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:13.296010  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:13.795731  307731 type.go:168] "Request Body" body=""
	I1202 21:13:13.795805  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:13.796135  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:13.796187  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:14.294883  307731 type.go:168] "Request Body" body=""
	I1202 21:13:14.294963  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:14.295317  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:14.795004  307731 type.go:168] "Request Body" body=""
	I1202 21:13:14.795086  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:14.795364  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:15.294928  307731 type.go:168] "Request Body" body=""
	I1202 21:13:15.294999  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:15.295367  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:15.794965  307731 type.go:168] "Request Body" body=""
	I1202 21:13:15.795053  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:15.795420  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:16.294820  307731 type.go:168] "Request Body" body=""
	I1202 21:13:16.294896  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:16.295225  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:16.295299  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:16.794923  307731 type.go:168] "Request Body" body=""
	I1202 21:13:16.795000  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:16.795324  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:17.294924  307731 type.go:168] "Request Body" body=""
	I1202 21:13:17.295001  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:17.295350  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:17.795483  307731 type.go:168] "Request Body" body=""
	I1202 21:13:17.795554  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:17.795826  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:18.295595  307731 type.go:168] "Request Body" body=""
	I1202 21:13:18.295669  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:18.296052  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:18.296108  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:18.795725  307731 type.go:168] "Request Body" body=""
	I1202 21:13:18.795799  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:18.796125  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:19.295390  307731 type.go:168] "Request Body" body=""
	I1202 21:13:19.295507  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:19.295770  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:19.795535  307731 type.go:168] "Request Body" body=""
	I1202 21:13:19.795613  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:19.795944  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:20.295747  307731 type.go:168] "Request Body" body=""
	I1202 21:13:20.295849  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:20.296214  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:20.296270  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:20.795538  307731 type.go:168] "Request Body" body=""
	I1202 21:13:20.795609  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:20.795888  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:21.295858  307731 type.go:168] "Request Body" body=""
	I1202 21:13:21.295932  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:21.296299  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:21.795052  307731 type.go:168] "Request Body" body=""
	I1202 21:13:21.795128  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:21.795467  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:22.295167  307731 type.go:168] "Request Body" body=""
	I1202 21:13:22.295249  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:22.295517  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:22.794923  307731 type.go:168] "Request Body" body=""
	I1202 21:13:22.795002  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:22.795333  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:22.795386  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:23.294912  307731 type.go:168] "Request Body" body=""
	I1202 21:13:23.294987  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:23.295388  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:23.795649  307731 type.go:168] "Request Body" body=""
	I1202 21:13:23.795757  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:23.796077  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:24.295857  307731 type.go:168] "Request Body" body=""
	I1202 21:13:24.295930  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:24.296228  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:24.794835  307731 type.go:168] "Request Body" body=""
	I1202 21:13:24.794907  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:24.795214  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:25.294914  307731 type.go:168] "Request Body" body=""
	I1202 21:13:25.294993  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:25.295261  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:25.295309  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:25.794927  307731 type.go:168] "Request Body" body=""
	I1202 21:13:25.795002  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:25.795364  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:26.294920  307731 type.go:168] "Request Body" body=""
	I1202 21:13:26.294999  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:26.295345  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:26.795052  307731 type.go:168] "Request Body" body=""
	I1202 21:13:26.795129  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:26.795387  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:27.294928  307731 type.go:168] "Request Body" body=""
	I1202 21:13:27.295010  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:27.295350  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:27.295406  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:27.795057  307731 type.go:168] "Request Body" body=""
	I1202 21:13:27.795135  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:27.795446  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:28.294855  307731 type.go:168] "Request Body" body=""
	I1202 21:13:28.294926  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:28.295180  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:28.795601  307731 type.go:168] "Request Body" body=""
	I1202 21:13:28.795676  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:28.796027  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:29.295836  307731 type.go:168] "Request Body" body=""
	I1202 21:13:29.295912  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:29.296231  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:29.296292  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:29.794829  307731 type.go:168] "Request Body" body=""
	I1202 21:13:29.794900  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:29.795151  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:30.295730  307731 type.go:168] "Request Body" body=""
	I1202 21:13:30.295806  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:30.296126  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:30.794839  307731 type.go:168] "Request Body" body=""
	I1202 21:13:30.794915  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:30.795249  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:31.297776  307731 type.go:168] "Request Body" body=""
	I1202 21:13:31.297853  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:31.298178  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:31.298228  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:31.794922  307731 type.go:168] "Request Body" body=""
	I1202 21:13:31.795000  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:31.795332  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:32.295025  307731 type.go:168] "Request Body" body=""
	I1202 21:13:32.295102  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:32.295433  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:32.795735  307731 type.go:168] "Request Body" body=""
	I1202 21:13:32.795800  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:32.796165  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:33.294866  307731 type.go:168] "Request Body" body=""
	I1202 21:13:33.294952  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:33.295304  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:33.794929  307731 type.go:168] "Request Body" body=""
	I1202 21:13:33.795016  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:33.795321  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:33.795370  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:34.294855  307731 type.go:168] "Request Body" body=""
	I1202 21:13:34.294928  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:34.295184  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:34.794873  307731 type.go:168] "Request Body" body=""
	I1202 21:13:34.794945  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:34.795278  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:35.294879  307731 type.go:168] "Request Body" body=""
	I1202 21:13:35.294959  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:35.295320  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:35.794857  307731 type.go:168] "Request Body" body=""
	I1202 21:13:35.794925  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:35.795178  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:36.294917  307731 type.go:168] "Request Body" body=""
	I1202 21:13:36.294991  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:36.295321  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:36.295373  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:36.795049  307731 type.go:168] "Request Body" body=""
	I1202 21:13:36.795127  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:36.795475  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:37.295735  307731 type.go:168] "Request Body" body=""
	I1202 21:13:37.295805  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:37.296066  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:37.795800  307731 type.go:168] "Request Body" body=""
	I1202 21:13:37.795873  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:37.796213  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:38.295712  307731 type.go:168] "Request Body" body=""
	I1202 21:13:38.295790  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:38.296136  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:38.296189  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:38.794842  307731 type.go:168] "Request Body" body=""
	I1202 21:13:38.794912  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:38.795163  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:39.294845  307731 type.go:168] "Request Body" body=""
	I1202 21:13:39.294918  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:39.295253  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:39.794921  307731 type.go:168] "Request Body" body=""
	I1202 21:13:39.795000  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:39.795333  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:40.295048  307731 type.go:168] "Request Body" body=""
	I1202 21:13:40.295117  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:40.295365  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:40.794900  307731 type.go:168] "Request Body" body=""
	I1202 21:13:40.794977  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:40.795334  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:40.795390  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:41.294896  307731 type.go:168] "Request Body" body=""
	I1202 21:13:41.294974  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:41.295282  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:41.794943  307731 type.go:168] "Request Body" body=""
	I1202 21:13:41.795060  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:41.795375  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:42.295106  307731 type.go:168] "Request Body" body=""
	I1202 21:13:42.295194  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:42.295589  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:42.794935  307731 type.go:168] "Request Body" body=""
	I1202 21:13:42.795013  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:42.795335  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:43.294846  307731 type.go:168] "Request Body" body=""
	I1202 21:13:43.294916  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:43.295163  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:43.295211  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:43.794883  307731 type.go:168] "Request Body" body=""
	I1202 21:13:43.794959  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:43.795282  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:44.294913  307731 type.go:168] "Request Body" body=""
	I1202 21:13:44.294993  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:44.295365  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:44.795651  307731 type.go:168] "Request Body" body=""
	I1202 21:13:44.795720  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:44.795982  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:45.295757  307731 type.go:168] "Request Body" body=""
	I1202 21:13:45.295838  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:45.296285  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:45.296345  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:45.795035  307731 type.go:168] "Request Body" body=""
	I1202 21:13:45.795117  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:45.795459  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:46.295269  307731 type.go:168] "Request Body" body=""
	I1202 21:13:46.295336  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:46.295589  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:46.794923  307731 type.go:168] "Request Body" body=""
	I1202 21:13:46.795000  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:46.795333  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:47.294920  307731 type.go:168] "Request Body" body=""
	I1202 21:13:47.295001  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:47.295346  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:47.794865  307731 type.go:168] "Request Body" body=""
	I1202 21:13:47.794939  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:47.795193  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:47.795233  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:48.294918  307731 type.go:168] "Request Body" body=""
	I1202 21:13:48.295004  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:48.295374  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:48.795086  307731 type.go:168] "Request Body" body=""
	I1202 21:13:48.795165  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:48.795501  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:49.295207  307731 type.go:168] "Request Body" body=""
	I1202 21:13:49.295288  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:49.295554  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:49.795252  307731 type.go:168] "Request Body" body=""
	I1202 21:13:49.795322  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:49.795632  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:49.795684  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:50.295531  307731 type.go:168] "Request Body" body=""
	I1202 21:13:50.295604  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:50.295957  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:50.795535  307731 type.go:168] "Request Body" body=""
	I1202 21:13:50.795608  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:50.796073  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:51.295825  307731 type.go:168] "Request Body" body=""
	I1202 21:13:51.295900  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:51.296243  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:51.794922  307731 type.go:168] "Request Body" body=""
	I1202 21:13:51.794998  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:51.795338  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:52.294871  307731 type.go:168] "Request Body" body=""
	I1202 21:13:52.294945  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:52.295299  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:52.295371  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:52.794884  307731 type.go:168] "Request Body" body=""
	I1202 21:13:52.794958  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:52.795306  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:53.295056  307731 type.go:168] "Request Body" body=""
	I1202 21:13:53.295127  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:53.295442  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:53.794868  307731 type.go:168] "Request Body" body=""
	I1202 21:13:53.794943  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:53.795222  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:54.294840  307731 type.go:168] "Request Body" body=""
	I1202 21:13:54.294920  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:54.295301  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET poll against https://192.168.49.2:8441/api/v1/nodes/functional-753958 repeats every ~500ms from 21:13:54 through 21:14:56, each attempt returning an empty response; node_ready.go:55 logs the same "will retry" warning roughly every 2.5s, the last being: ...]
	W1202 21:14:56.295551  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:56.795788  307731 type.go:168] "Request Body" body=""
	I1202 21:14:56.795881  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:56.796235  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:57.294957  307731 type.go:168] "Request Body" body=""
	I1202 21:14:57.295029  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:57.295368  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:57.795079  307731 type.go:168] "Request Body" body=""
	I1202 21:14:57.795157  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:57.795491  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:58.295762  307731 type.go:168] "Request Body" body=""
	I1202 21:14:58.295829  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:58.296084  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:58.296124  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:58.795828  307731 type.go:168] "Request Body" body=""
	I1202 21:14:58.795901  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:58.796192  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:59.294890  307731 type.go:168] "Request Body" body=""
	I1202 21:14:59.294971  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:59.295293  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:59.795662  307731 type.go:168] "Request Body" body=""
	I1202 21:14:59.795732  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:59.795995  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:00.294841  307731 type.go:168] "Request Body" body=""
	I1202 21:15:00.294929  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:00.295288  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:00.794964  307731 type.go:168] "Request Body" body=""
	I1202 21:15:00.795065  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:00.795443  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:15:00.795520  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:15:01.295565  307731 type.go:168] "Request Body" body=""
	I1202 21:15:01.295641  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:01.295933  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:01.795669  307731 type.go:168] "Request Body" body=""
	I1202 21:15:01.795744  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:01.796077  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:02.294851  307731 type.go:168] "Request Body" body=""
	I1202 21:15:02.294928  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:02.295300  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:02.794984  307731 type.go:168] "Request Body" body=""
	I1202 21:15:02.795058  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:02.795384  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:03.294942  307731 type.go:168] "Request Body" body=""
	I1202 21:15:03.295015  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:03.295368  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:15:03.295426  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:15:03.794971  307731 type.go:168] "Request Body" body=""
	I1202 21:15:03.795053  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:03.795395  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:04.295082  307731 type.go:168] "Request Body" body=""
	I1202 21:15:04.295157  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:04.295429  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:04.794958  307731 type.go:168] "Request Body" body=""
	I1202 21:15:04.795043  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:04.795426  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:05.294930  307731 type.go:168] "Request Body" body=""
	I1202 21:15:05.295018  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:05.295356  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:05.795116  307731 type.go:168] "Request Body" body=""
	I1202 21:15:05.795195  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:05.795515  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:15:05.795575  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:15:06.295370  307731 type.go:168] "Request Body" body=""
	I1202 21:15:06.295451  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:06.295771  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:06.795538  307731 type.go:168] "Request Body" body=""
	I1202 21:15:06.795617  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:06.795962  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:07.295701  307731 type.go:168] "Request Body" body=""
	I1202 21:15:07.295775  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:07.296023  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:07.795795  307731 type.go:168] "Request Body" body=""
	I1202 21:15:07.795872  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:07.796194  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:15:07.796261  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:15:08.294940  307731 type.go:168] "Request Body" body=""
	I1202 21:15:08.295013  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:08.295374  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:08.794862  307731 type.go:168] "Request Body" body=""
	I1202 21:15:08.794931  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:08.795235  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:09.294931  307731 type.go:168] "Request Body" body=""
	I1202 21:15:09.295007  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:09.295352  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:09.795086  307731 type.go:168] "Request Body" body=""
	I1202 21:15:09.795162  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:09.795514  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:10.299197  307731 type.go:168] "Request Body" body=""
	I1202 21:15:10.299301  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:10.299703  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:15:10.299761  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:15:10.795524  307731 type.go:168] "Request Body" body=""
	I1202 21:15:10.795615  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:10.796019  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:11.294844  307731 node_ready.go:38] duration metric: took 6m0.000140797s for node "functional-753958" to be "Ready" ...
	I1202 21:15:11.298019  307731 out.go:203] 
	W1202 21:15:11.300907  307731 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 21:15:11.300927  307731 out.go:285] * 
	W1202 21:15:11.303086  307731 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:15:11.306181  307731 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.831249053Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.831259104Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.831273446Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.831285557Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.831297339Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.831308793Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.831326376Z" level=info msg="runtime interface created"
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.831333055Z" level=info msg="created NRI interface"
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.831343081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.831374530Z" level=info msg="Connect containerd service"
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.831645996Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.832751142Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.842854029Z" level=info msg="Start subscribing containerd event"
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.843061325Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.843124724Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.843068282Z" level=info msg="Start recovering state"
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.861144221Z" level=info msg="Start event monitor"
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.861331530Z" level=info msg="Start cni network conf syncer for default"
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.861398235Z" level=info msg="Start streaming server"
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.861467148Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.861523835Z" level=info msg="runtime interface starting up..."
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.861585675Z" level=info msg="starting plugins..."
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.861664368Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 21:09:08 functional-753958 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.864406089Z" level=info msg="containerd successfully booted in 0.055350s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:15:13.069520    9032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:15:13.070301    9032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:15:13.071965    9032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:15:13.072585    9032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:15:13.074293    9032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 21:15:13 up  2:57,  0 user,  load average: 0.29, 0.35, 0.88
	Linux functional-753958 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:15:09 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:15:10 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 807.
	Dec 02 21:15:10 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:10 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:10 functional-753958 kubelet[8920]: E1202 21:15:10.342102    8920 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:15:10 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:15:10 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:15:11 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 808.
	Dec 02 21:15:11 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:11 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:11 functional-753958 kubelet[8925]: E1202 21:15:11.090287    8925 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:15:11 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:15:11 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:15:11 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 809.
	Dec 02 21:15:11 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:11 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:11 functional-753958 kubelet[8930]: E1202 21:15:11.882226    8930 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:15:11 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:15:11 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:15:12 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 02 21:15:12 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:12 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:12 functional-753958 kubelet[8951]: E1202 21:15:12.601622    8951 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:15:12 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:15:12 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
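
The kubelet journal above shows the root cause of this failure: kubelet exits immediately with "kubelet is configured to not run on a host using cgroup v1", systemd restarts it in a loop (the restart counter is past 800), and with no kubelet the apiserver on 192.168.49.2:8441 never comes up. The following is a minimal, hypothetical Go sketch (not minikube or kubelet source) of how such a validation can detect the host cgroup version: statfs /sys/fs/cgroup and compare the filesystem magic to CGROUP2_SUPER_MAGIC.

	// cgroupcheck is an illustrative sketch, assuming golang.org/x/sys/unix.
	// If /sys/fs/cgroup is a cgroup2 filesystem, the host runs the unified
	// (v2) hierarchy; otherwise it is a legacy v1 host like the one above.
	package main

	import (
		"fmt"
		"log"

		"golang.org/x/sys/unix"
	)

	func main() {
		var st unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
			log.Fatalf("statfs /sys/fs/cgroup: %v", err)
		}
		if uint64(st.Type) == uint64(unix.CGROUP2_SUPER_MAGIC) {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			// On a v1 host this is typically a tmpfs with per-controller
			// mounts; a kubelet built to require v2 refuses to start here,
			// which matches the restart loop in the journal above.
			fmt.Println("cgroup v1 (legacy hierarchy)")
		}
	}
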
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958: exit status 2 (398.534952ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753958" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.26s)
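
The long run of round_trippers "Request"/"Response" pairs at the top of the log is minikube's node-Ready wait: it re-issues GET /api/v1/nodes/functional-753958 roughly every 500ms, tolerating connection-refused errors, until the 6m StartHostTimeout expires. Below is a minimal sketch of that style of wait loop, assuming client-go; the kubeconfig path and error handling are illustrative, not minikube's node_ready.go.

	// nodeready polls a node's Ready condition until it is True or a 6m
	// deadline expires, retrying on transient errors such as the
	// "connection refused" seen throughout the log above.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "functional-753958", metav1.GetOptions{})
				if err != nil {
					// Transient errors (apiserver down, connection refused)
					// are logged and retried rather than aborting the wait.
					log.Printf("will retry: %v", err)
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			log.Fatalf("node never became Ready: %v", err) // context deadline exceeded
		}
		fmt.Println("node is Ready")
	}
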

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-753958 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-753958 get po -A: exit status 1 (54.155636ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-753958 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-753958 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-753958 get po -A"
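
For reference, "kubectl get po -A" is an all-namespaces pod list against the apiserver. The equivalent API call, sketched below with client-go (the kubeconfig path is a placeholder), would fail the same way with "connection refused" while the apiserver on 192.168.49.2:8441 is down.

	// listpods lists pods across all namespaces, which is what
	// "kubectl get po -A" does at the API level.
	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// An empty namespace argument means "all namespaces".
		pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatalf("list pods: %v", err) // e.g. connect: connection refused
		}
		for _, p := range pods.Items {
			fmt.Printf("%s\t%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}
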
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753958
helpers_test.go:243: (dbg) docker inspect functional-753958:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	        "Created": "2025-12-02T21:00:39.470229988Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301734,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:00:39.535019201Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hostname",
	        "HostsPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hosts",
	        "LogPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a-json.log",
	        "Name": "/functional-753958",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753958:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-753958",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	                "LowerDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753958",
	                "Source": "/var/lib/docker/volumes/functional-753958/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753958",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753958",
	                "name.minikube.sigs.k8s.io": "functional-753958",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44df82336b1507d3d877e818baebb098332071ab7b3e3f7343e15c1fe55b3ab1",
	            "SandboxKey": "/var/run/docker/netns/44df82336b15",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753958": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:7f:7f:d7:c5:84",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e90d0c1216d32743827f22180e4e07c31360f0f3cc3431312aff46869716bb9",
	                    "EndpointID": "5ead8efafa1df1b03c8f1f51c032157081a17706bc48186adc0670bc42c0b521",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753958",
	                        "321ef4a88b51"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
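
The inspect output above shows how the kic container publishes the apiserver port: 8441/tcp inside the container is bound to 127.0.0.1:33111 on the host. Below is a minimal sketch of reading that mapping programmatically, assuming the Docker Engine Go SDK (github.com/docker/docker/client); the container name comes from the report, everything else is illustrative.

	// inspectports prints the host bindings published for 8441/tcp on the
	// functional-753958 container, mirroring the NetworkSettings.Ports data
	// in the docker inspect output above.
	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
		"github.com/docker/go-connections/nat"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		info, err := cli.ContainerInspect(context.Background(), "functional-753958")
		if err != nil {
			log.Fatal(err)
		}
		for _, b := range info.NetworkSettings.Ports[nat.Port("8441/tcp")] {
			fmt.Printf("8441/tcp -> %s:%s\n", b.HostIP, b.HostPort) // e.g. 127.0.0.1:33111
		}
	}
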
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958: exit status 2 (300.420248ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-446665 ssh sudo cat /etc/ssl/certs/2632412.pem                                                                                                       │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls                                                                                                                                      │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ ssh            │ functional-446665 ssh sudo cat /usr/share/ca-certificates/2632412.pem                                                                                           │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ ssh            │ functional-446665 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image load --daemon kicbase/echo-server:functional-446665 --alsologtostderr                                                                   │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ ssh            │ functional-446665 ssh sudo cat /etc/test/nested/copy/263241/hosts                                                                                               │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls                                                                                                                                      │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image save kicbase/echo-server:functional-446665 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image rm kicbase/echo-server:functional-446665 --alsologtostderr                                                                              │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls                                                                                                                                      │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ update-context │ functional-446665 update-context --alsologtostderr -v=2                                                                                                         │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ update-context │ functional-446665 update-context --alsologtostderr -v=2                                                                                                         │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ update-context │ functional-446665 update-context --alsologtostderr -v=2                                                                                                         │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls                                                                                                                                      │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image save --daemon kicbase/echo-server:functional-446665 --alsologtostderr                                                                   │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls --format yaml --alsologtostderr                                                                                                      │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls --format short --alsologtostderr                                                                                                     │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls --format json --alsologtostderr                                                                                                      │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls --format table --alsologtostderr                                                                                                     │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ ssh            │ functional-446665 ssh pgrep buildkitd                                                                                                                           │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │                     │
	│ image          │ functional-446665 image build -t localhost/my-image:functional-446665 testdata/build --alsologtostderr                                                          │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image          │ functional-446665 image ls                                                                                                                                      │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ delete         │ -p functional-446665                                                                                                                                            │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ start          │ -p functional-753958 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0         │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │                     │
	│ start          │ -p functional-753958 --alsologtostderr -v=8                                                                                                                     │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:09 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:09:05
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:09:05.869127  307731 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:09:05.869342  307731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:09:05.869372  307731 out.go:374] Setting ErrFile to fd 2...
	I1202 21:09:05.869392  307731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:09:05.870120  307731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:09:05.870642  307731 out.go:368] Setting JSON to false
	I1202 21:09:05.871532  307731 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10284,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 21:09:05.871698  307731 start.go:143] virtualization:  
	I1202 21:09:05.875240  307731 out.go:179] * [functional-753958] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:09:05.878196  307731 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:09:05.878269  307731 notify.go:221] Checking for updates...
	I1202 21:09:05.884072  307731 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:09:05.886942  307731 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:05.889899  307731 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 21:09:05.892813  307731 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:09:05.895771  307731 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:09:05.899217  307731 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:09:05.899365  307731 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:09:05.932799  307731 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:09:05.932919  307731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:09:05.993966  307731 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:09:05.984741651 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:09:05.994072  307731 docker.go:319] overlay module found
	I1202 21:09:05.997248  307731 out.go:179] * Using the docker driver based on existing profile
	I1202 21:09:06.000038  307731 start.go:309] selected driver: docker
	I1202 21:09:06.000060  307731 start.go:927] validating driver "docker" against &{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:09:06.000154  307731 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:09:06.000264  307731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:09:06.066709  307731 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:09:06.057768194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:09:06.067144  307731 cni.go:84] Creating CNI manager for ""
	I1202 21:09:06.067209  307731 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 21:09:06.067263  307731 start.go:353] cluster config:
	{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:09:06.070421  307731 out.go:179] * Starting "functional-753958" primary control-plane node in "functional-753958" cluster
	I1202 21:09:06.073261  307731 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 21:09:06.078117  307731 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:09:06.080953  307731 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 21:09:06.081041  307731 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:09:06.101516  307731 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:09:06.101541  307731 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 21:09:06.138751  307731 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1202 21:09:06.314468  307731 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
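
Note on the two 404s above: no v18 preload tarball has been published for v1.35.0-beta.0, so minikube falls back to the per-image cache that the following lines verify. The missing artifact can be confirmed by hand (a sketch, runnable from any host):

	$ curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 | head -n1
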
	I1202 21:09:06.314628  307731 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/config.json ...
	I1202 21:09:06.314753  307731 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.314852  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 21:09:06.314868  307731 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 127.02µs
	I1202 21:09:06.314884  307731 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 21:09:06.314900  307731 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.314935  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 21:09:06.314945  307731 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 46.735µs
	I1202 21:09:06.314952  307731 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 21:09:06.314968  307731 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315000  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 21:09:06.315009  307731 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 42.764µs
	I1202 21:09:06.315016  307731 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 21:09:06.315030  307731 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315059  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 21:09:06.315069  307731 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 39.875µs
	I1202 21:09:06.315075  307731 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 21:09:06.315089  307731 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315119  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 21:09:06.315127  307731 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 38.629µs
	I1202 21:09:06.315144  307731 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 21:09:06.315143  307731 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:09:06.315177  307731 start.go:360] acquireMachinesLock for functional-753958: {Name:mk3203202a2efc5b27c2a0a16d932dc1b1f07522 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315202  307731 start.go:364] duration metric: took 13.3µs to acquireMachinesLock for "functional-753958"
	I1202 21:09:06.315219  307731 start.go:96] Skipping create...Using existing machine configuration
	I1202 21:09:06.315230  307731 fix.go:54] fixHost starting: 
	I1202 21:09:06.315183  307731 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315307  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 21:09:06.315332  307731 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 153.571µs
	I1202 21:09:06.315357  307731 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 21:09:06.315387  307731 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315443  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 21:09:06.315465  307731 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 80.424µs
	I1202 21:09:06.315488  307731 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:09:06.315527  307731 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315588  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 21:09:06.315619  307731 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 95.488µs
	I1202 21:09:06.315640  307731 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 21:09:06.315489  307731 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 21:09:06.315801  307731 cache.go:87] Successfully saved all images to host disk.
	I1202 21:09:06.333736  307731 fix.go:112] recreateIfNeeded on functional-753958: state=Running err=<nil>
	W1202 21:09:06.333771  307731 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 21:09:06.337175  307731 out.go:252] * Updating the running docker "functional-753958" container ...
	I1202 21:09:06.337206  307731 machine.go:94] provisionDockerMachine start ...
	I1202 21:09:06.337301  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:06.354474  307731 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:06.354810  307731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:09:06.354830  307731 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:09:06.501197  307731 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-753958
	
	I1202 21:09:06.501220  307731 ubuntu.go:182] provisioning hostname "functional-753958"
	I1202 21:09:06.501288  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:06.519375  307731 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:06.519710  307731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:09:06.519727  307731 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-753958 && echo "functional-753958" | sudo tee /etc/hostname
	I1202 21:09:06.687724  307731 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-753958
	
	I1202 21:09:06.687814  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:06.707419  307731 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:06.707758  307731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:09:06.707780  307731 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753958/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:09:06.858340  307731 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 21:09:06.858365  307731 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 21:09:06.858387  307731 ubuntu.go:190] setting up certificates
	I1202 21:09:06.858407  307731 provision.go:84] configureAuth start
	I1202 21:09:06.858472  307731 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
	I1202 21:09:06.877925  307731 provision.go:143] copyHostCerts
	I1202 21:09:06.877980  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 21:09:06.878020  307731 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 21:09:06.878036  307731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 21:09:06.878121  307731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 21:09:06.878219  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 21:09:06.878244  307731 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 21:09:06.878253  307731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 21:09:06.878283  307731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 21:09:06.878341  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 21:09:06.878361  307731 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 21:09:06.878366  307731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 21:09:06.878392  307731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 21:09:06.878454  307731 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.functional-753958 san=[127.0.0.1 192.168.49.2 functional-753958 localhost minikube]
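
The SAN list chosen above can be checked against the generated certificate with stock openssl (a sketch; -ext requires OpenSSL 1.1.1 or newer):

	$ openssl x509 -in /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem -noout -ext subjectAltName
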
	I1202 21:09:07.212788  307731 provision.go:177] copyRemoteCerts
	I1202 21:09:07.212871  307731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:09:07.212914  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.229990  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
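
The session sshutil opens here can be reproduced by hand when debugging a stuck provision, using the key path, forwarded port, and username logged above (a sketch):

	$ ssh -i /home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa -p 33108 docker@127.0.0.1
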
	I1202 21:09:07.334622  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 21:09:07.334690  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 21:09:07.358156  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 21:09:07.358212  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:09:07.374829  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 21:09:07.374936  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 21:09:07.391856  307731 provision.go:87] duration metric: took 533.420534ms to configureAuth
	I1202 21:09:07.391883  307731 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:09:07.392075  307731 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:09:07.392088  307731 machine.go:97] duration metric: took 1.054874904s to provisionDockerMachine
	I1202 21:09:07.392096  307731 start.go:293] postStartSetup for "functional-753958" (driver="docker")
	I1202 21:09:07.392108  307731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:09:07.392158  307731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:09:07.392201  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.409892  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.513929  307731 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:09:07.517313  307731 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1202 21:09:07.517377  307731 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1202 21:09:07.517399  307731 command_runner.go:130] > VERSION_ID="12"
	I1202 21:09:07.517411  307731 command_runner.go:130] > VERSION="12 (bookworm)"
	I1202 21:09:07.517423  307731 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1202 21:09:07.517428  307731 command_runner.go:130] > ID=debian
	I1202 21:09:07.517432  307731 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1202 21:09:07.517437  307731 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1202 21:09:07.517460  307731 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1202 21:09:07.517505  307731 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:09:07.517555  307731 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:09:07.517574  307731 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 21:09:07.517638  307731 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 21:09:07.517741  307731 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 21:09:07.517755  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> /etc/ssl/certs/2632412.pem
	I1202 21:09:07.517830  307731 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts -> hosts in /etc/test/nested/copy/263241
	I1202 21:09:07.517839  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts -> /etc/test/nested/copy/263241/hosts
	I1202 21:09:07.517882  307731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/263241
	I1202 21:09:07.525639  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 21:09:07.543648  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts --> /etc/test/nested/copy/263241/hosts (40 bytes)
	I1202 21:09:07.560944  307731 start.go:296] duration metric: took 168.831988ms for postStartSetup
	I1202 21:09:07.561067  307731 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:09:07.561116  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.579622  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.682695  307731 command_runner.go:130] > 12%
	I1202 21:09:07.682778  307731 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:09:07.687210  307731 command_runner.go:130] > 172G
	I1202 21:09:07.687707  307731 fix.go:56] duration metric: took 1.372471826s for fixHost
	I1202 21:09:07.687729  307731 start.go:83] releasing machines lock for "functional-753958", held for 1.372515567s
	I1202 21:09:07.687799  307731 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
	I1202 21:09:07.704780  307731 ssh_runner.go:195] Run: cat /version.json
	I1202 21:09:07.704833  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.704860  307731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:09:07.704931  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.726613  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.737148  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.829144  307731 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1202 21:09:07.829307  307731 ssh_runner.go:195] Run: systemctl --version
	I1202 21:09:07.919742  307731 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1202 21:09:07.919788  307731 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1202 21:09:07.919811  307731 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1202 21:09:07.919883  307731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 21:09:07.924332  307731 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1202 21:09:07.924495  307731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:09:07.924590  307731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:09:07.932451  307731 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 21:09:07.932475  307731 start.go:496] detecting cgroup driver to use...
	I1202 21:09:07.932505  307731 detect.go:187] detected "cgroupfs" cgroup driver on host os
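
This matches the CgroupDriver:cgroupfs field in the docker info dump earlier; the same detection can be run directly (a sketch):

	$ docker info --format '{{.CgroupDriver}}'
	cgroupfs
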
	I1202 21:09:07.932553  307731 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 21:09:07.947902  307731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 21:09:07.964330  307731 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:09:07.964400  307731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:09:07.980760  307731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:09:07.995134  307731 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:09:08.122567  307731 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:09:08.232585  307731 docker.go:234] disabling docker service ...
	I1202 21:09:08.232660  307731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:09:08.247806  307731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:09:08.260075  307731 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:09:08.380227  307731 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:09:08.498586  307731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
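
After the stop/disable/mask sequence above, systemd reports the docker unit as masked; for example (a sketch):

	$ sudo systemctl is-enabled docker.service
	masked
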
	I1202 21:09:08.511975  307731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:09:08.525630  307731 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1202 21:09:08.525792  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 21:09:08.534331  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 21:09:08.543412  307731 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 21:09:08.543534  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 21:09:08.552561  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 21:09:08.561268  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 21:09:08.570127  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 21:09:08.578716  307731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:09:08.586804  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 21:09:08.595543  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 21:09:08.604412  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1202 21:09:08.613462  307731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:09:08.620008  307731 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1202 21:09:08.621008  307731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 21:09:08.628262  307731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:09:08.744391  307731 ssh_runner.go:195] Run: sudo systemctl restart containerd
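
To spot-check that the sed edits above survived the restart, the rewritten keys can be grepped from inside the node (a sketch, using minikube's ssh passthrough):

	$ minikube -p functional-753958 ssh -- sudo grep -E 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
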
	I1202 21:09:08.864675  307731 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 21:09:08.864794  307731 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 21:09:08.868351  307731 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1202 21:09:08.868411  307731 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1202 21:09:08.868454  307731 command_runner.go:130] > Device: 0,72	Inode: 1612        Links: 1
	I1202 21:09:08.868480  307731 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 21:09:08.868521  307731 command_runner.go:130] > Access: 2025-12-02 21:09:08.840863455 +0000
	I1202 21:09:08.868544  307731 command_runner.go:130] > Modify: 2025-12-02 21:09:08.840863455 +0000
	I1202 21:09:08.868569  307731 command_runner.go:130] > Change: 2025-12-02 21:09:08.840863455 +0000
	I1202 21:09:08.868599  307731 command_runner.go:130] >  Birth: -
	I1202 21:09:08.868892  307731 start.go:564] Will wait 60s for crictl version
	I1202 21:09:08.868989  307731 ssh_runner.go:195] Run: which crictl
	I1202 21:09:08.872054  307731 command_runner.go:130] > /usr/local/bin/crictl
	I1202 21:09:08.872553  307731 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:09:08.897996  307731 command_runner.go:130] > Version:  0.1.0
	I1202 21:09:08.898089  307731 command_runner.go:130] > RuntimeName:  containerd
	I1202 21:09:08.898120  307731 command_runner.go:130] > RuntimeVersion:  v2.1.5
	I1202 21:09:08.898152  307731 command_runner.go:130] > RuntimeApiVersion:  v1
	I1202 21:09:08.900685  307731 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 21:09:08.900802  307731 ssh_runner.go:195] Run: containerd --version
	I1202 21:09:08.918917  307731 command_runner.go:130] > containerd containerd.io v2.1.5 fcd43222d6b07379a4be9786bda52438f0dd16a1
	I1202 21:09:08.920319  307731 ssh_runner.go:195] Run: containerd --version
	I1202 21:09:08.938561  307731 command_runner.go:130] > containerd containerd.io v2.1.5 fcd43222d6b07379a4be9786bda52438f0dd16a1
	I1202 21:09:08.945896  307731 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 21:09:08.948895  307731 cli_runner.go:164] Run: docker network inspect functional-753958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:09:08.964797  307731 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:09:08.968415  307731 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1202 21:09:08.968697  307731 kubeadm.go:884] updating cluster {Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 21:09:08.968812  307731 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 21:09:08.968871  307731 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:09:08.989960  307731 command_runner.go:130] > {
	I1202 21:09:08.989978  307731 command_runner.go:130] >   "images":  [
	I1202 21:09:08.989982  307731 command_runner.go:130] >     {
	I1202 21:09:08.989991  307731 command_runner.go:130] >       "id":  "sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1202 21:09:08.989996  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990002  307731 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1202 21:09:08.990005  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990009  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990013  307731 command_runner.go:130] >       "size":  "8032639",
	I1202 21:09:08.990018  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990022  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990025  307731 command_runner.go:130] >     },
	I1202 21:09:08.990027  307731 command_runner.go:130] >     {
	I1202 21:09:08.990039  307731 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1202 21:09:08.990044  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990049  307731 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1202 21:09:08.990052  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990057  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990066  307731 command_runner.go:130] >       "size":  "21166088",
	I1202 21:09:08.990071  307731 command_runner.go:130] >       "username":  "nonroot",
	I1202 21:09:08.990075  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990078  307731 command_runner.go:130] >     },
	I1202 21:09:08.990085  307731 command_runner.go:130] >     {
	I1202 21:09:08.990092  307731 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1202 21:09:08.990096  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990101  307731 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1202 21:09:08.990104  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990108  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990112  307731 command_runner.go:130] >       "size":  "21134420",
	I1202 21:09:08.990116  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990120  307731 command_runner.go:130] >         "value":  "0"
	I1202 21:09:08.990123  307731 command_runner.go:130] >       },
	I1202 21:09:08.990126  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990130  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990133  307731 command_runner.go:130] >     },
	I1202 21:09:08.990136  307731 command_runner.go:130] >     {
	I1202 21:09:08.990143  307731 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1202 21:09:08.990147  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990156  307731 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1202 21:09:08.990159  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990163  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990167  307731 command_runner.go:130] >       "size":  "24676285",
	I1202 21:09:08.990170  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990175  307731 command_runner.go:130] >         "value":  "0"
	I1202 21:09:08.990178  307731 command_runner.go:130] >       },
	I1202 21:09:08.990182  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990189  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990192  307731 command_runner.go:130] >     },
	I1202 21:09:08.990195  307731 command_runner.go:130] >     {
	I1202 21:09:08.990202  307731 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1202 21:09:08.990206  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990213  307731 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1202 21:09:08.990216  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990220  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990224  307731 command_runner.go:130] >       "size":  "20658969",
	I1202 21:09:08.990227  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990231  307731 command_runner.go:130] >         "value":  "0"
	I1202 21:09:08.990233  307731 command_runner.go:130] >       },
	I1202 21:09:08.990237  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990241  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990244  307731 command_runner.go:130] >     },
	I1202 21:09:08.990246  307731 command_runner.go:130] >     {
	I1202 21:09:08.990253  307731 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1202 21:09:08.990257  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990262  307731 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1202 21:09:08.990265  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990269  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990273  307731 command_runner.go:130] >       "size":  "22428165",
	I1202 21:09:08.990277  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990280  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990283  307731 command_runner.go:130] >     },
	I1202 21:09:08.990287  307731 command_runner.go:130] >     {
	I1202 21:09:08.990293  307731 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1202 21:09:08.990297  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990302  307731 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1202 21:09:08.990305  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990314  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990318  307731 command_runner.go:130] >       "size":  "15389290",
	I1202 21:09:08.990322  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990329  307731 command_runner.go:130] >         "value":  "0"
	I1202 21:09:08.990332  307731 command_runner.go:130] >       },
	I1202 21:09:08.990336  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990339  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990342  307731 command_runner.go:130] >     },
	I1202 21:09:08.990345  307731 command_runner.go:130] >     {
	I1202 21:09:08.990352  307731 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1202 21:09:08.990356  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990361  307731 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1202 21:09:08.990364  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990371  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990375  307731 command_runner.go:130] >       "size":  "265458",
	I1202 21:09:08.990379  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990383  307731 command_runner.go:130] >         "value":  "65535"
	I1202 21:09:08.990386  307731 command_runner.go:130] >       },
	I1202 21:09:08.990389  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990393  307731 command_runner.go:130] >       "pinned":  true
	I1202 21:09:08.990396  307731 command_runner.go:130] >     }
	I1202 21:09:08.990402  307731 command_runner.go:130] >   ]
	I1202 21:09:08.990404  307731 command_runner.go:130] > }
	I1202 21:09:08.992021  307731 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 21:09:08.992044  307731 cache_images.go:86] Images are preloaded, skipping loading
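
The image JSON above is easier to scan when reduced to tags (a sketch, assuming jq is installed on the host):

	$ sudo crictl images --output json | jq -r '.images[].repoTags[]'
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/coredns/coredns:v1.13.1
	...
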
	I1202 21:09:08.992052  307731 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1202 21:09:08.992155  307731 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-753958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
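
The rendered kubelet unit, including the ExecStart override shown above, can be reviewed from inside the node (a sketch):

	$ minikube -p functional-753958 ssh -- systemctl cat kubelet
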
	I1202 21:09:08.992222  307731 ssh_runner.go:195] Run: sudo crictl info
	I1202 21:09:09.027109  307731 command_runner.go:130] > {
	I1202 21:09:09.027127  307731 command_runner.go:130] >   "cniconfig": {
	I1202 21:09:09.027132  307731 command_runner.go:130] >     "Networks": [
	I1202 21:09:09.027136  307731 command_runner.go:130] >       {
	I1202 21:09:09.027142  307731 command_runner.go:130] >         "Config": {
	I1202 21:09:09.027146  307731 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1202 21:09:09.027151  307731 command_runner.go:130] >           "Name": "cni-loopback",
	I1202 21:09:09.027155  307731 command_runner.go:130] >           "Plugins": [
	I1202 21:09:09.027164  307731 command_runner.go:130] >             {
	I1202 21:09:09.027168  307731 command_runner.go:130] >               "Network": {
	I1202 21:09:09.027172  307731 command_runner.go:130] >                 "ipam": {},
	I1202 21:09:09.027178  307731 command_runner.go:130] >                 "type": "loopback"
	I1202 21:09:09.027181  307731 command_runner.go:130] >               },
	I1202 21:09:09.027186  307731 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1202 21:09:09.027189  307731 command_runner.go:130] >             }
	I1202 21:09:09.027193  307731 command_runner.go:130] >           ],
	I1202 21:09:09.027203  307731 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1202 21:09:09.027207  307731 command_runner.go:130] >         },
	I1202 21:09:09.027212  307731 command_runner.go:130] >         "IFName": "lo"
	I1202 21:09:09.027215  307731 command_runner.go:130] >       }
	I1202 21:09:09.027218  307731 command_runner.go:130] >     ],
	I1202 21:09:09.027223  307731 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1202 21:09:09.027227  307731 command_runner.go:130] >     "PluginDirs": [
	I1202 21:09:09.027230  307731 command_runner.go:130] >       "/opt/cni/bin"
	I1202 21:09:09.027234  307731 command_runner.go:130] >     ],
	I1202 21:09:09.027238  307731 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1202 21:09:09.027242  307731 command_runner.go:130] >     "Prefix": "eth"
	I1202 21:09:09.027245  307731 command_runner.go:130] >   },
	I1202 21:09:09.027248  307731 command_runner.go:130] >   "config": {
	I1202 21:09:09.027252  307731 command_runner.go:130] >     "cdiSpecDirs": [
	I1202 21:09:09.027256  307731 command_runner.go:130] >       "/etc/cdi",
	I1202 21:09:09.027259  307731 command_runner.go:130] >       "/var/run/cdi"
	I1202 21:09:09.027263  307731 command_runner.go:130] >     ],
	I1202 21:09:09.027266  307731 command_runner.go:130] >     "cni": {
	I1202 21:09:09.027269  307731 command_runner.go:130] >       "binDir": "",
	I1202 21:09:09.027273  307731 command_runner.go:130] >       "binDirs": [
	I1202 21:09:09.027277  307731 command_runner.go:130] >         "/opt/cni/bin"
	I1202 21:09:09.027280  307731 command_runner.go:130] >       ],
	I1202 21:09:09.027285  307731 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1202 21:09:09.027289  307731 command_runner.go:130] >       "confTemplate": "",
	I1202 21:09:09.027292  307731 command_runner.go:130] >       "ipPref": "",
	I1202 21:09:09.027300  307731 command_runner.go:130] >       "maxConfNum": 1,
	I1202 21:09:09.027304  307731 command_runner.go:130] >       "setupSerially": false,
	I1202 21:09:09.027309  307731 command_runner.go:130] >       "useInternalLoopback": false
	I1202 21:09:09.027312  307731 command_runner.go:130] >     },
	I1202 21:09:09.027321  307731 command_runner.go:130] >     "containerd": {
	I1202 21:09:09.027325  307731 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1202 21:09:09.027330  307731 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1202 21:09:09.027335  307731 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1202 21:09:09.027339  307731 command_runner.go:130] >       "runtimes": {
	I1202 21:09:09.027342  307731 command_runner.go:130] >         "runc": {
	I1202 21:09:09.027347  307731 command_runner.go:130] >           "ContainerAnnotations": null,
	I1202 21:09:09.027351  307731 command_runner.go:130] >           "PodAnnotations": null,
	I1202 21:09:09.027357  307731 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1202 21:09:09.027361  307731 command_runner.go:130] >           "cgroupWritable": false,
	I1202 21:09:09.027365  307731 command_runner.go:130] >           "cniConfDir": "",
	I1202 21:09:09.027370  307731 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1202 21:09:09.027374  307731 command_runner.go:130] >           "io_type": "",
	I1202 21:09:09.027378  307731 command_runner.go:130] >           "options": {
	I1202 21:09:09.027382  307731 command_runner.go:130] >             "BinaryName": "",
	I1202 21:09:09.027386  307731 command_runner.go:130] >             "CriuImagePath": "",
	I1202 21:09:09.027390  307731 command_runner.go:130] >             "CriuWorkPath": "",
	I1202 21:09:09.027394  307731 command_runner.go:130] >             "IoGid": 0,
	I1202 21:09:09.027398  307731 command_runner.go:130] >             "IoUid": 0,
	I1202 21:09:09.027402  307731 command_runner.go:130] >             "NoNewKeyring": false,
	I1202 21:09:09.027407  307731 command_runner.go:130] >             "Root": "",
	I1202 21:09:09.027411  307731 command_runner.go:130] >             "ShimCgroup": "",
	I1202 21:09:09.027415  307731 command_runner.go:130] >             "SystemdCgroup": false
	I1202 21:09:09.027418  307731 command_runner.go:130] >           },
	I1202 21:09:09.027424  307731 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1202 21:09:09.027430  307731 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1202 21:09:09.027434  307731 command_runner.go:130] >           "runtimePath": "",
	I1202 21:09:09.027440  307731 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1202 21:09:09.027444  307731 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1202 21:09:09.027451  307731 command_runner.go:130] >           "snapshotter": ""
	I1202 21:09:09.027455  307731 command_runner.go:130] >         }
	I1202 21:09:09.027458  307731 command_runner.go:130] >       }
	I1202 21:09:09.027461  307731 command_runner.go:130] >     },
	I1202 21:09:09.027470  307731 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1202 21:09:09.027476  307731 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1202 21:09:09.027481  307731 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1202 21:09:09.027485  307731 command_runner.go:130] >     "disableApparmor": false,
	I1202 21:09:09.027490  307731 command_runner.go:130] >     "disableHugetlbController": true,
	I1202 21:09:09.027494  307731 command_runner.go:130] >     "disableProcMount": false,
	I1202 21:09:09.027499  307731 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1202 21:09:09.027503  307731 command_runner.go:130] >     "enableCDI": true,
	I1202 21:09:09.027507  307731 command_runner.go:130] >     "enableSelinux": false,
	I1202 21:09:09.027511  307731 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1202 21:09:09.027515  307731 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1202 21:09:09.027520  307731 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1202 21:09:09.027525  307731 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1202 21:09:09.027529  307731 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1202 21:09:09.027534  307731 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1202 21:09:09.027538  307731 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1202 21:09:09.027544  307731 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1202 21:09:09.027548  307731 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1202 21:09:09.027554  307731 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1202 21:09:09.027558  307731 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1202 21:09:09.027563  307731 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1202 21:09:09.027566  307731 command_runner.go:130] >   },
	I1202 21:09:09.027569  307731 command_runner.go:130] >   "features": {
	I1202 21:09:09.027574  307731 command_runner.go:130] >     "supplemental_groups_policy": true
	I1202 21:09:09.027577  307731 command_runner.go:130] >   },
	I1202 21:09:09.027581  307731 command_runner.go:130] >   "golang": "go1.24.9",
	I1202 21:09:09.027591  307731 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1202 21:09:09.027600  307731 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1202 21:09:09.027604  307731 command_runner.go:130] >   "runtimeHandlers": [
	I1202 21:09:09.027610  307731 command_runner.go:130] >     {
	I1202 21:09:09.027614  307731 command_runner.go:130] >       "features": {
	I1202 21:09:09.027619  307731 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1202 21:09:09.027623  307731 command_runner.go:130] >         "user_namespaces": true
	I1202 21:09:09.027626  307731 command_runner.go:130] >       }
	I1202 21:09:09.027629  307731 command_runner.go:130] >     },
	I1202 21:09:09.027631  307731 command_runner.go:130] >     {
	I1202 21:09:09.027635  307731 command_runner.go:130] >       "features": {
	I1202 21:09:09.027639  307731 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1202 21:09:09.027644  307731 command_runner.go:130] >         "user_namespaces": true
	I1202 21:09:09.027646  307731 command_runner.go:130] >       },
	I1202 21:09:09.027650  307731 command_runner.go:130] >       "name": "runc"
	I1202 21:09:09.027653  307731 command_runner.go:130] >     }
	I1202 21:09:09.027656  307731 command_runner.go:130] >   ],
	I1202 21:09:09.027659  307731 command_runner.go:130] >   "status": {
	I1202 21:09:09.027663  307731 command_runner.go:130] >     "conditions": [
	I1202 21:09:09.027666  307731 command_runner.go:130] >       {
	I1202 21:09:09.027670  307731 command_runner.go:130] >         "message": "",
	I1202 21:09:09.027673  307731 command_runner.go:130] >         "reason": "",
	I1202 21:09:09.027677  307731 command_runner.go:130] >         "status": true,
	I1202 21:09:09.027681  307731 command_runner.go:130] >         "type": "RuntimeReady"
	I1202 21:09:09.027685  307731 command_runner.go:130] >       },
	I1202 21:09:09.027688  307731 command_runner.go:130] >       {
	I1202 21:09:09.027694  307731 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1202 21:09:09.027699  307731 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1202 21:09:09.027703  307731 command_runner.go:130] >         "status": false,
	I1202 21:09:09.027707  307731 command_runner.go:130] >         "type": "NetworkReady"
	I1202 21:09:09.027710  307731 command_runner.go:130] >       },
	I1202 21:09:09.027713  307731 command_runner.go:130] >       {
	I1202 21:09:09.027718  307731 command_runner.go:130] >         "message": "",
	I1202 21:09:09.027722  307731 command_runner.go:130] >         "reason": "",
	I1202 21:09:09.027726  307731 command_runner.go:130] >         "status": true,
	I1202 21:09:09.027731  307731 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1202 21:09:09.027737  307731 command_runner.go:130] >       }
	I1202 21:09:09.027740  307731 command_runner.go:130] >     ]
	I1202 21:09:09.027743  307731 command_runner.go:130] >   }
	I1202 21:09:09.027746  307731 command_runner.go:130] > }
	I1202 21:09:09.029686  307731 cni.go:84] Creating CNI manager for ""
	I1202 21:09:09.029710  307731 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
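(Annotation: the JSON dump above is containerd's CRI status, the same shape `crictl info` returns. The NetworkReady=false condition with "cni plugin not initialized" is expected at this point, since /etc/cni/net.d is still empty, which is why minikube picks kindnet for the docker driver + containerd runtime. A minimal sketch, assuming crictl is on PATH and can reach the containerd socket, that parses the "status.conditions" block shown above:

// cristatus.go - sketch: run `crictl info` and report the CRI runtime
// conditions, matching the "status.conditions" shape dumped in the log.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type criInfo struct {
	Status struct {
		Conditions []struct {
			Type    string `json:"type"`
			Status  bool   `json:"status"`
			Reason  string `json:"reason"`
			Message string `json:"message"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	out, err := exec.Command("crictl", "info").Output()
	if err != nil {
		log.Fatalf("crictl info: %v", err)
	}
	var info criInfo
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatalf("parse: %v", err)
	}
	for _, c := range info.Status.Conditions {
		fmt.Printf("%-14s ok=%-5v reason=%q msg=%q\n", c.Type, c.Status, c.Reason, c.Message)
	}
}

Against the state logged above this would print RuntimeReady ok=true and NetworkReady ok=false with reason NetworkPluginNotReady.)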
	I1202 21:09:09.029745  307731 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:09:09.029776  307731 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753958 NodeName:functional-753958 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:09:09.029910  307731 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-753958"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
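	(Annotation: the generated config above is one multi-document YAML joining InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration; a few lines below it is scp'd to /var/tmp/minikube/kubeadm.yaml.new. A sketch, assuming gopkg.in/yaml.v3 is available, that splits the documents and prints each kind:

// splitkinds.go - sketch: list the apiVersion/kind of each document in a
// multi-document kubeadm YAML like the one generated above.
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	// yaml.v3 decodes "---"-separated documents one Decode call at a time.
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}

Run against the config above it would print the four kinds in order: InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.)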
	
	I1202 21:09:09.029985  307731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 21:09:09.036886  307731 command_runner.go:130] > kubeadm
	I1202 21:09:09.036909  307731 command_runner.go:130] > kubectl
	I1202 21:09:09.036915  307731 command_runner.go:130] > kubelet
	I1202 21:09:09.037789  307731 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 21:09:09.037851  307731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:09:09.045467  307731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 21:09:09.058043  307731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 21:09:09.070239  307731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1202 21:09:09.082241  307731 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:09:09.085795  307731 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1202 21:09:09.086355  307731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:09:09.208713  307731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:09:09.542492  307731 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958 for IP: 192.168.49.2
	I1202 21:09:09.542524  307731 certs.go:195] generating shared ca certs ...
	I1202 21:09:09.542541  307731 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:09.542698  307731 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 21:09:09.542757  307731 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 21:09:09.542770  307731 certs.go:257] generating profile certs ...
	I1202 21:09:09.542908  307731 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key
	I1202 21:09:09.542989  307731 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key.c4f6fd35
	I1202 21:09:09.543042  307731 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key
	I1202 21:09:09.543063  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 21:09:09.543077  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 21:09:09.543095  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 21:09:09.543113  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 21:09:09.543136  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 21:09:09.543152  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 21:09:09.543163  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 21:09:09.543181  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 21:09:09.543248  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 21:09:09.543300  307731 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 21:09:09.543314  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:09:09.543356  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 21:09:09.543389  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:09:09.543418  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 21:09:09.543492  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 21:09:09.543552  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.543576  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.543600  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem -> /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.544214  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:09:09.562449  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:09:09.579657  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:09:09.597016  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 21:09:09.615077  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 21:09:09.633715  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 21:09:09.651379  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:09:09.669401  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 21:09:09.688777  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 21:09:09.706718  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:09:09.724108  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 21:09:09.741960  307731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 21:09:09.754915  307731 ssh_runner.go:195] Run: openssl version
	I1202 21:09:09.760531  307731 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1202 21:09:09.760935  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 21:09:09.769169  307731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.772688  307731 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.772981  307731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.773081  307731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.818276  307731 command_runner.go:130] > 3ec20f2e
	I1202 21:09:09.818787  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 21:09:09.826520  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:09:09.834827  307731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.838656  307731 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.838686  307731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.838739  307731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.879212  307731 command_runner.go:130] > b5213941
	I1202 21:09:09.879657  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 21:09:09.887484  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 21:09:09.895881  307731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.899623  307731 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.899669  307731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.899717  307731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.940074  307731 command_runner.go:130] > 51391683
	I1202 21:09:09.940525  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
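	(Annotation: the openssl x509 -hash values above (3ec20f2e, b5213941, 51391683) are OpenSSL subject-name hashes; linking each certificate as /etc/ssl/certs/<hash>.0 is the CApath lookup convention, which is exactly what the three test -L || ln -fs commands do. A sketch of the same pattern in Go, assuming the openssl binary is on PATH; the paths are taken from the log:

// calink.go - sketch of the hash-symlink convention used above: compute the
// OpenSSL subject hash of a certificate and link it as <hash>.0 in CApath.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // from the log above
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs equivalent: drop any stale link, then point it at the cert.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", cert)
})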
	I1202 21:09:09.948324  307731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:09:09.951828  307731 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:09:09.951867  307731 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1202 21:09:09.951875  307731 command_runner.go:130] > Device: 259,1	Inode: 1305405     Links: 1
	I1202 21:09:09.951881  307731 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 21:09:09.951888  307731 command_runner.go:130] > Access: 2025-12-02 21:05:02.335914079 +0000
	I1202 21:09:09.951894  307731 command_runner.go:130] > Modify: 2025-12-02 21:00:57.486756379 +0000
	I1202 21:09:09.951898  307731 command_runner.go:130] > Change: 2025-12-02 21:00:57.486756379 +0000
	I1202 21:09:09.951903  307731 command_runner.go:130] >  Birth: 2025-12-02 21:00:57.486756379 +0000
	I1202 21:09:09.951997  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 21:09:09.992474  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:09.992586  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 21:09:10.044870  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:10.045432  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 21:09:10.090412  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:10.091042  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 21:09:10.132690  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:10.133145  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 21:09:10.173976  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:10.174453  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 21:09:10.215639  307731 command_runner.go:130] > Certificate will not expire
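	(Annotation: openssl x509 -checkend 86400 exits non-zero if the certificate expires within the next 86400 seconds, so each "Certificate will not expire" line above means that cert stays valid for at least another 24 hours. The same check in pure Go via crypto/x509, as a sketch; the path is one of the certs checked above:

// checkend.go - sketch of what `openssl x509 -checkend 86400` verifies:
// does the certificate remain valid for at least another 24 hours?
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).Before(cert.NotAfter) {
		fmt.Println("Certificate will not expire")
	} else {
		fmt.Println("Certificate will expire")
	}
})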
	I1202 21:09:10.216098  307731 kubeadm.go:401] StartCluster: {Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:09:10.216220  307731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 21:09:10.216321  307731 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:09:10.242158  307731 cri.go:89] found id: ""
	I1202 21:09:10.242234  307731 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:09:10.249118  307731 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1202 21:09:10.249140  307731 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1202 21:09:10.249151  307731 command_runner.go:130] > /var/lib/minikube/etcd:
	I1202 21:09:10.250041  307731 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 21:09:10.250060  307731 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 21:09:10.250140  307731 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 21:09:10.257350  307731 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:09:10.257790  307731 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-753958" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:10.257903  307731 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-261381/kubeconfig needs updating (will repair): [kubeconfig missing "functional-753958" cluster setting kubeconfig missing "functional-753958" context setting]
	I1202 21:09:10.258244  307731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:10.258662  307731 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:10.258838  307731 kapi.go:59] client config for functional-753958: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key", CAFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 21:09:10.259364  307731 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 21:09:10.259381  307731 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 21:09:10.259386  307731 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 21:09:10.259392  307731 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 21:09:10.259397  307731 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 21:09:10.259441  307731 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1202 21:09:10.259684  307731 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 21:09:10.267575  307731 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 21:09:10.267606  307731 kubeadm.go:602] duration metric: took 17.540251ms to restartPrimaryControlPlane
	I1202 21:09:10.267616  307731 kubeadm.go:403] duration metric: took 51.535685ms to StartCluster
	I1202 21:09:10.267631  307731 settings.go:142] acquiring lock: {Name:mk484fa83ac7553aeb154b510943680cadb4046e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:10.267694  307731 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:10.268283  307731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:10.268485  307731 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 21:09:10.268816  307731 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:09:10.268866  307731 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 21:09:10.268984  307731 addons.go:70] Setting storage-provisioner=true in profile "functional-753958"
	I1202 21:09:10.269003  307731 addons.go:239] Setting addon storage-provisioner=true in "functional-753958"
	I1202 21:09:10.269024  307731 host.go:66] Checking if "functional-753958" exists ...
	I1202 21:09:10.269023  307731 addons.go:70] Setting default-storageclass=true in profile "functional-753958"
	I1202 21:09:10.269176  307731 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-753958"
	I1202 21:09:10.269690  307731 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:09:10.269905  307731 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:09:10.274878  307731 out.go:179] * Verifying Kubernetes components...
	I1202 21:09:10.279673  307731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:09:10.309974  307731 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:10.310183  307731 kapi.go:59] client config for functional-753958: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key", CAFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 21:09:10.310507  307731 addons.go:239] Setting addon default-storageclass=true in "functional-753958"
	I1202 21:09:10.310544  307731 host.go:66] Checking if "functional-753958" exists ...
	I1202 21:09:10.311034  307731 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:09:10.322713  307731 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:09:10.325707  307731 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:10.325729  307731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 21:09:10.325795  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:10.357829  307731 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:10.357850  307731 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 21:09:10.357914  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:10.371695  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:10.400329  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:10.499296  307731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:09:10.516631  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:10.547824  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:11.294654  307731 node_ready.go:35] waiting up to 6m0s for node "functional-753958" to be "Ready" ...
	I1202 21:09:11.294774  307731 type.go:168] "Request Body" body=""
	I1202 21:09:11.294779  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.294839  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:11.295227  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:11.295315  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.295463  307731 retry.go:31] will retry after 210.924688ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.295467  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:11.295364  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.295550  307731 retry.go:31] will retry after 203.437895ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
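	(Annotation: both addon applies fail with "connection refused" because the apiserver on port 8441 is still coming up after the restart; retry.go answers with short, growing, slightly jittered delays (203ms and 210ms here, then 400ms, 422ms, 564ms, 637ms, 876ms below) instead of failing the start. A generic sketch of that retry-with-growing-delay pattern; the helper name and backoff constants are illustrative, not minikube's exact values:

// retryapply.go - sketch of the retry loop visible in the log: re-run an
// action with growing, jittered delays until it succeeds or attempts run out.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Grow the delay each round and add jitter, as in the log's
		// ~200ms -> 400ms -> 564ms -> 876ms progression.
		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	_ = retry(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 4 {
			return fmt.Errorf("connect: connection refused")
		}
		return nil
	})
})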
	I1202 21:09:11.500110  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:11.506791  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:11.578640  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:11.581915  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.581967  307731 retry.go:31] will retry after 400.592485ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.595609  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:11.595676  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.595708  307731 retry.go:31] will retry after 422.737023ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.794907  307731 type.go:168] "Request Body" body=""
	I1202 21:09:11.795054  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:11.795388  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:11.982828  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:12.018958  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:12.086246  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:12.086287  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.086307  307731 retry.go:31] will retry after 564.880189ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.117100  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:12.117143  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.117191  307731 retry.go:31] will retry after 637.534191ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.295409  307731 type.go:168] "Request Body" body=""
	I1202 21:09:12.295483  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:12.295805  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:12.652365  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:12.710471  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:12.710580  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.710622  307731 retry.go:31] will retry after 876.325619ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.755731  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:12.795162  307731 type.go:168] "Request Body" body=""
	I1202 21:09:12.795277  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:12.795599  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:12.835060  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:12.835099  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.835118  307731 retry.go:31] will retry after 1.227832404s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:13.295855  307731 type.go:168] "Request Body" body=""
	I1202 21:09:13.295948  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:13.296269  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:13.296338  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
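	(Annotation: node_ready.go polls GET /api/v1/nodes/functional-753958 roughly every 500ms, tolerating connection-refused errors until the apiserver is back, and succeeds once the node reports the Ready condition. A sketch of an equivalent readiness poll with client-go; the kubeconfig path and node name come from the log, and the polling interval is illustrative:

// nodeready.go - sketch of the "wait for node Ready" poll above, using client-go.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21997-261381/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-753958", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		} else {
			fmt.Println("will retry:", err) // e.g. connection refused while the apiserver restarts
		}
		time.Sleep(500 * time.Millisecond)
	}
})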
	I1202 21:09:13.587806  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:13.646676  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:13.646721  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:13.646742  307731 retry.go:31] will retry after 1.443838067s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:13.795158  307731 type.go:168] "Request Body" body=""
	I1202 21:09:13.795236  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:13.795586  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:14.064081  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:14.123819  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:14.127173  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:14.127215  307731 retry.go:31] will retry after 1.221247817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:14.295601  307731 type.go:168] "Request Body" body=""
	I1202 21:09:14.295675  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:14.295968  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:14.795792  307731 type.go:168] "Request Body" body=""
	I1202 21:09:14.795874  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:14.796179  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:15.091734  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:15.151479  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:15.151525  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:15.151546  307731 retry.go:31] will retry after 1.850953854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:15.294847  307731 type.go:168] "Request Body" body=""
	I1202 21:09:15.294941  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:15.295253  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:15.349587  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:15.413525  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:15.416721  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:15.416752  307731 retry.go:31] will retry after 1.691274377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:15.795194  307731 type.go:168] "Request Body" body=""
	I1202 21:09:15.795307  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:15.795621  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:15.795696  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
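
Editor's note: interleaved with the apply retries, minikube polls the node object roughly every 500 ms, waiting for its Ready condition; while port 8441 refuses connections the response line stays empty and the warning above is emitted. A standard-library sketch of that polling loop follows; the URL and cadence are taken from the log, and skipping TLS verification is a shortcut to keep the sketch self-contained, not what minikube does.

	// Sketch only: poll the apiserver for the node until the endpoint answers.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForNode(url string, timeout time.Duration) error {
		client := &http.Client{
			// Illustrative shortcut: the test cluster serves a self-signed CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // a real readiness check would parse the Ready condition
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
		}
		return fmt.Errorf("node not reachable at %s within %s", url, timeout)
	}

	func main() {
		if err := waitForNode("https://192.168.49.2:8441/api/v1/nodes/functional-753958", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
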
	I1202 21:09:16.295456  307731 type.go:168] "Request Body" body=""
	I1202 21:09:16.295552  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:16.295874  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:16.795680  307731 type.go:168] "Request Body" body=""
	I1202 21:09:16.795755  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:16.796091  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:17.003193  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:17.061077  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:17.064289  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:17.064321  307731 retry.go:31] will retry after 2.076549374s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:17.108496  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:17.168660  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:17.168709  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:17.168731  307731 retry.go:31] will retry after 3.158627903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:17.295738  307731 type.go:168] "Request Body" body=""
	I1202 21:09:17.295812  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:17.296081  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:17.794893  307731 type.go:168] "Request Body" body=""
	I1202 21:09:17.794974  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:17.795334  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:18.294955  307731 type.go:168] "Request Body" body=""
	I1202 21:09:18.295057  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:18.295390  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:18.295447  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:18.795090  307731 type.go:168] "Request Body" body=""
	I1202 21:09:18.795156  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:18.795510  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:19.141123  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:19.199068  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:19.202437  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:19.202469  307731 retry.go:31] will retry after 2.729492901s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:19.295833  307731 type.go:168] "Request Body" body=""
	I1202 21:09:19.295905  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:19.296241  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:19.794962  307731 type.go:168] "Request Body" body=""
	I1202 21:09:19.795035  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:19.795344  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:20.295255  307731 type.go:168] "Request Body" body=""
	I1202 21:09:20.295325  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:20.295687  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:20.295737  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:20.327882  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:20.391902  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:20.391939  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:20.391960  307731 retry.go:31] will retry after 4.367650264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:20.795532  307731 type.go:168] "Request Body" body=""
	I1202 21:09:20.795609  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:20.795920  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:21.295837  307731 type.go:168] "Request Body" body=""
	I1202 21:09:21.295923  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:21.296260  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:21.794943  307731 type.go:168] "Request Body" body=""
	I1202 21:09:21.795018  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:21.795282  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:21.932718  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:21.990698  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:21.990736  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:21.990761  307731 retry.go:31] will retry after 5.196584204s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:22.295359  307731 type.go:168] "Request Body" body=""
	I1202 21:09:22.295443  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:22.295788  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:22.295845  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:22.795464  307731 type.go:168] "Request Body" body=""
	I1202 21:09:22.795562  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:22.795917  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:23.295669  307731 type.go:168] "Request Body" body=""
	I1202 21:09:23.295739  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:23.296001  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:23.795753  307731 type.go:168] "Request Body" body=""
	I1202 21:09:23.795825  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:23.796151  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:24.295815  307731 type.go:168] "Request Body" body=""
	I1202 21:09:24.295890  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:24.296207  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:24.296265  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:24.759924  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:24.795570  307731 type.go:168] "Request Body" body=""
	I1202 21:09:24.795642  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:24.795905  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:24.817214  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:24.821374  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:24.821411  307731 retry.go:31] will retry after 3.851570628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:25.294967  307731 type.go:168] "Request Body" body=""
	I1202 21:09:25.295041  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:25.295322  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:25.794947  307731 type.go:168] "Request Body" body=""
	I1202 21:09:25.795017  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:25.795343  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:26.295350  307731 type.go:168] "Request Body" body=""
	I1202 21:09:26.295431  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:26.295727  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:26.795297  307731 type.go:168] "Request Body" body=""
	I1202 21:09:26.795366  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:26.795685  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:26.795740  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
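
Editor's note: the paired "Request"/"Response" lines come from a debug round-tripper. client-go wraps the HTTP transport with a logger that records the verb, URL, and headers of each call, and a dial error surfaces as the empty status="" seen here. A simplified sketch of that wrapping pattern (not client-go's actual round_trippers.go):

	// Sketch of the logging round-tripper pattern behind the lines above.
	package main

	import (
		"log"
		"net/http"
	)

	type loggingRT struct{ next http.RoundTripper }

	func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
		log.Printf("Request verb=%s url=%s headers=%v", req.Method, req.URL, req.Header)
		resp, err := l.next.RoundTrip(req)
		if err != nil {
			log.Printf("Response status=%q err=%v", "", err) // matches the empty status="" lines
			return nil, err
		}
		log.Printf("Response status=%q", resp.Status)
		return resp, nil
	}

	func main() {
		client := &http.Client{Transport: loggingRT{next: http.DefaultTransport}}
		_, _ = client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-753958")
	}
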
	I1202 21:09:27.188447  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:27.254238  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:27.254282  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:27.254304  307731 retry.go:31] will retry after 6.785596085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:27.295437  307731 type.go:168] "Request Body" body=""
	I1202 21:09:27.295523  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:27.295865  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:27.794985  307731 type.go:168] "Request Body" body=""
	I1202 21:09:27.795057  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:27.795311  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:28.294999  307731 type.go:168] "Request Body" body=""
	I1202 21:09:28.295102  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:28.295384  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:28.674112  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:28.734788  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:28.734834  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:28.734853  307731 retry.go:31] will retry after 5.470614597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:28.794971  307731 type.go:168] "Request Body" body=""
	I1202 21:09:28.795042  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:28.795339  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:29.295607  307731 type.go:168] "Request Body" body=""
	I1202 21:09:29.295683  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:29.296024  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:29.296105  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:29.794837  307731 type.go:168] "Request Body" body=""
	I1202 21:09:29.794912  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:29.795239  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:30.295136  307731 type.go:168] "Request Body" body=""
	I1202 21:09:30.295232  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:30.295517  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:30.794890  307731 type.go:168] "Request Body" body=""
	I1202 21:09:30.794959  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:30.795267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:31.294932  307731 type.go:168] "Request Body" body=""
	I1202 21:09:31.295003  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:31.295317  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:31.794931  307731 type.go:168] "Request Body" body=""
	I1202 21:09:31.795007  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:31.795289  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:31.795338  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:32.295580  307731 type.go:168] "Request Body" body=""
	I1202 21:09:32.295653  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:32.295944  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:32.795804  307731 type.go:168] "Request Body" body=""
	I1202 21:09:32.795885  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:32.796241  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:33.294972  307731 type.go:168] "Request Body" body=""
	I1202 21:09:33.295049  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:33.295374  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:33.794828  307731 type.go:168] "Request Body" body=""
	I1202 21:09:33.794899  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:33.795152  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:34.040709  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:34.103827  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:34.103870  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:34.103890  307731 retry.go:31] will retry after 13.233422448s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:34.206146  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:34.265937  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:34.265992  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:34.266011  307731 retry.go:31] will retry after 9.178751123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
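
Editor's note: by this point the retry intervals have grown from ~1.4s at the start of the section to 9-18s here, and reach 28.8s further below: a capped, jittered exponential backoff, which is why successive waits wander rather than strictly increase. A sketch of that shape (the constants are assumptions, not minikube's actual values):

	// Sketch only: capped exponential backoff with jitter, matching the
	// shape of the retry intervals in this log.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func backoff(attempt int) time.Duration {
		const base = time.Second
		const maxDelay = 30 * time.Second
		d := base << uint(attempt) // 1s, 2s, 4s, ...
		if d > maxDelay {
			d = maxDelay
		}
		// +/-50% jitter: successive waits wander around the doubling curve.
		return d/2 + time.Duration(rand.Int63n(int64(d)))
	}

	func main() {
		for i := 0; i < 6; i++ {
			fmt.Println(backoff(i))
		}
	}
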
	I1202 21:09:34.295270  307731 type.go:168] "Request Body" body=""
	I1202 21:09:34.295377  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:34.295751  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:34.295808  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:34.795590  307731 type.go:168] "Request Body" body=""
	I1202 21:09:34.795669  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:34.795998  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:35.295384  307731 type.go:168] "Request Body" body=""
	I1202 21:09:35.295449  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:35.295792  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:35.795609  307731 type.go:168] "Request Body" body=""
	I1202 21:09:35.795690  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:35.795985  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:36.294855  307731 type.go:168] "Request Body" body=""
	I1202 21:09:36.294949  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:36.295235  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:36.795205  307731 type.go:168] "Request Body" body=""
	I1202 21:09:36.795285  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:36.795563  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:36.795617  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:37.294937  307731 type.go:168] "Request Body" body=""
	I1202 21:09:37.295019  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:37.295313  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:37.794909  307731 type.go:168] "Request Body" body=""
	I1202 21:09:37.794999  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:37.795276  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:38.294875  307731 type.go:168] "Request Body" body=""
	I1202 21:09:38.294951  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:38.295216  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:38.794960  307731 type.go:168] "Request Body" body=""
	I1202 21:09:38.795035  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:38.795328  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:39.295040  307731 type.go:168] "Request Body" body=""
	I1202 21:09:39.295116  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:39.295474  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:39.295528  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:39.795755  307731 type.go:168] "Request Body" body=""
	I1202 21:09:39.795827  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:39.796097  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:40.295759  307731 type.go:168] "Request Body" body=""
	I1202 21:09:40.295831  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:40.296122  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:40.794848  307731 type.go:168] "Request Body" body=""
	I1202 21:09:40.794921  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:40.795244  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:41.294881  307731 type.go:168] "Request Body" body=""
	I1202 21:09:41.294965  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:41.295255  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:41.794953  307731 type.go:168] "Request Body" body=""
	I1202 21:09:41.795034  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:41.795359  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:41.795415  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:42.295127  307731 type.go:168] "Request Body" body=""
	I1202 21:09:42.295208  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:42.295661  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:42.795016  307731 type.go:168] "Request Body" body=""
	I1202 21:09:42.795105  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:42.795395  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:43.294952  307731 type.go:168] "Request Body" body=""
	I1202 21:09:43.295026  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:43.295345  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:43.445783  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:43.508150  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:43.508187  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:43.508208  307731 retry.go:31] will retry after 18.255533178s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:43.795638  307731 type.go:168] "Request Body" body=""
	I1202 21:09:43.795730  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:43.796071  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:43.796132  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:44.295329  307731 type.go:168] "Request Body" body=""
	I1202 21:09:44.295407  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:44.295673  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:44.795488  307731 type.go:168] "Request Body" body=""
	I1202 21:09:44.795564  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:44.795884  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:45.295740  307731 type.go:168] "Request Body" body=""
	I1202 21:09:45.295822  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:45.296199  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:45.794853  307731 type.go:168] "Request Body" body=""
	I1202 21:09:45.794922  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:45.795177  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:46.295009  307731 type.go:168] "Request Body" body=""
	I1202 21:09:46.295107  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:46.295418  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:46.295474  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:46.795131  307731 type.go:168] "Request Body" body=""
	I1202 21:09:46.795214  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:46.795532  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:47.295250  307731 type.go:168] "Request Body" body=""
	I1202 21:09:47.295339  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:47.295611  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:47.337905  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:47.398412  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:47.398459  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:47.398478  307731 retry.go:31] will retry after 28.802230035s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
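The storage-provisioner apply fails at client-side validation because kubectl cannot download the OpenAPI schema from the down apiserver, and minikube's retry helper schedules another attempt after a backoff (the retry.go:31 lines above). A hedged sketch of that apply-and-retry shape; applyWithRetry, the attempt count, and the jittered 10-35s backoff are assumptions for illustration, not minikube's retry package.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry runs `kubectl apply` and, on failure, sleeps a randomized
// backoff before the next attempt, mirroring the "will retry after ..."
// pattern in the log above.
func applyWithRetry(manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed: %v\noutput:\n%s", err, out)
		backoff := time.Duration(10+rand.Intn(25)) * time.Second
		fmt.Printf("will retry after %s: %v\n", backoff, lastErr)
		time.Sleep(backoff)
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
		fmt.Println("giving up:", err)
	}
}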
	I1202 21:09:47.794958  307731 type.go:168] "Request Body" body=""
	I1202 21:09:47.795033  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:47.795332  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:48.294941  307731 type.go:168] "Request Body" body=""
	I1202 21:09:48.295012  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:48.295290  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:48.794980  307731 type.go:168] "Request Body" body=""
	I1202 21:09:48.795053  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:48.795304  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:48.795347  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:49.294944  307731 type.go:168] "Request Body" body=""
	I1202 21:09:49.295017  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:49.295302  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:49.794949  307731 type.go:168] "Request Body" body=""
	I1202 21:09:49.795021  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:49.795348  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:50.295306  307731 type.go:168] "Request Body" body=""
	I1202 21:09:50.295374  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:50.295672  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:50.795457  307731 type.go:168] "Request Body" body=""
	I1202 21:09:50.795527  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:50.795850  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:50.795908  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:51.295904  307731 type.go:168] "Request Body" body=""
	I1202 21:09:51.295977  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:51.296267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:51.794890  307731 type.go:168] "Request Body" body=""
	I1202 21:09:51.794969  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:51.795305  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:52.294941  307731 type.go:168] "Request Body" body=""
	I1202 21:09:52.295012  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:52.295341  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:52.794926  307731 type.go:168] "Request Body" body=""
	I1202 21:09:52.795024  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:52.795310  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:53.295540  307731 type.go:168] "Request Body" body=""
	I1202 21:09:53.295618  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:53.295885  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:53.295930  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:53.795650  307731 type.go:168] "Request Body" body=""
	I1202 21:09:53.795732  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:53.796075  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:54.295722  307731 type.go:168] "Request Body" body=""
	I1202 21:09:54.295802  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:54.296147  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:54.795430  307731 type.go:168] "Request Body" body=""
	I1202 21:09:54.795496  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:54.795754  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:55.295532  307731 type.go:168] "Request Body" body=""
	I1202 21:09:55.295606  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:55.295927  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:55.295984  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:55.795762  307731 type.go:168] "Request Body" body=""
	I1202 21:09:55.795835  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:55.796153  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:56.294887  307731 type.go:168] "Request Body" body=""
	I1202 21:09:56.294998  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:56.295324  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:56.794933  307731 type.go:168] "Request Body" body=""
	I1202 21:09:56.795014  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:56.795395  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:57.295124  307731 type.go:168] "Request Body" body=""
	I1202 21:09:57.295200  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:57.295537  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:57.795227  307731 type.go:168] "Request Body" body=""
	I1202 21:09:57.795291  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:57.795605  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:57.795689  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:58.295413  307731 type.go:168] "Request Body" body=""
	I1202 21:09:58.295489  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:58.295818  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:58.795617  307731 type.go:168] "Request Body" body=""
	I1202 21:09:58.795690  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:58.796019  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:59.295296  307731 type.go:168] "Request Body" body=""
	I1202 21:09:59.295368  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:59.295623  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:59.794911  307731 type.go:168] "Request Body" body=""
	I1202 21:09:59.794983  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:59.795300  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:00.295306  307731 type.go:168] "Request Body" body=""
	I1202 21:10:00.295398  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:00.295706  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:00.295756  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:00.795732  307731 type.go:168] "Request Body" body=""
	I1202 21:10:00.795832  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:00.796237  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:01.295009  307731 type.go:168] "Request Body" body=""
	I1202 21:10:01.295081  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:01.295430  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:01.763971  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:10:01.794859  307731 type.go:168] "Request Body" body=""
	I1202 21:10:01.794929  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:01.795196  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:01.835916  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:01.839908  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:10:01.839940  307731 retry.go:31] will retry after 30.677466671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
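Both failing applies report the same root cause: kubectl's validation step dials localhost:8441 for /openapi/v2 and gets connection refused, so the manifest is never submitted at all. A quick TCP probe, sketched below assuming the same host:port, separates "apiserver not listening" from a genuine manifest error.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the apiserver's local endpoint the way kubectl's OpenAPI
	// download would reach it; a refused dial reproduces the error above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}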
	I1202 21:10:02.295717  307731 type.go:168] "Request Body" body=""
	I1202 21:10:02.295826  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:02.296209  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:02.296289  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:02.794978  307731 type.go:168] "Request Body" body=""
	I1202 21:10:02.795054  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:02.795406  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:03.295097  307731 type.go:168] "Request Body" body=""
	I1202 21:10:03.295176  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:03.295453  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:03.794940  307731 type.go:168] "Request Body" body=""
	I1202 21:10:03.795026  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:03.795356  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:04.295114  307731 type.go:168] "Request Body" body=""
	I1202 21:10:04.295196  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:04.295536  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:04.795775  307731 type.go:168] "Request Body" body=""
	I1202 21:10:04.795845  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:04.796122  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:04.796171  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:05.294855  307731 type.go:168] "Request Body" body=""
	I1202 21:10:05.294934  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:05.295264  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:05.795079  307731 type.go:168] "Request Body" body=""
	I1202 21:10:05.795173  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:05.795544  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:06.295514  307731 type.go:168] "Request Body" body=""
	I1202 21:10:06.295601  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:06.295881  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:06.795664  307731 type.go:168] "Request Body" body=""
	I1202 21:10:06.795741  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:06.796081  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:07.294800  307731 type.go:168] "Request Body" body=""
	I1202 21:10:07.294876  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:07.295208  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:07.295261  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:07.795446  307731 type.go:168] "Request Body" body=""
	I1202 21:10:07.795518  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:07.795780  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:08.295543  307731 type.go:168] "Request Body" body=""
	I1202 21:10:08.295618  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:08.295937  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:08.795803  307731 type.go:168] "Request Body" body=""
	I1202 21:10:08.795884  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:08.796321  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:09.294866  307731 type.go:168] "Request Body" body=""
	I1202 21:10:09.294942  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:09.295253  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:09.295304  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:09.794945  307731 type.go:168] "Request Body" body=""
	I1202 21:10:09.795028  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:09.795434  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:10.295294  307731 type.go:168] "Request Body" body=""
	I1202 21:10:10.295369  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:10.295705  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:10.795493  307731 type.go:168] "Request Body" body=""
	I1202 21:10:10.795577  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:10.795953  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:11.295781  307731 type.go:168] "Request Body" body=""
	I1202 21:10:11.295870  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:11.296220  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:11.296268  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:11.794950  307731 type.go:168] "Request Body" body=""
	I1202 21:10:11.795027  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:11.795368  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:12.295047  307731 type.go:168] "Request Body" body=""
	I1202 21:10:12.295128  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:12.295374  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:12.794921  307731 type.go:168] "Request Body" body=""
	I1202 21:10:12.794998  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:12.795385  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:13.294954  307731 type.go:168] "Request Body" body=""
	I1202 21:10:13.295031  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:13.295358  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:13.794876  307731 type.go:168] "Request Body" body=""
	I1202 21:10:13.794943  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:13.795197  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:13.795238  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:14.294945  307731 type.go:168] "Request Body" body=""
	I1202 21:10:14.295037  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:14.295425  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:14.795147  307731 type.go:168] "Request Body" body=""
	I1202 21:10:14.795224  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:14.795562  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:15.295257  307731 type.go:168] "Request Body" body=""
	I1202 21:10:15.295338  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:15.295612  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:15.794916  307731 type.go:168] "Request Body" body=""
	I1202 21:10:15.794993  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:15.795325  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:15.795380  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:16.200937  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:10:16.256562  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:16.259927  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:10:16.259959  307731 retry.go:31] will retry after 18.923209073s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:10:16.295107  307731 type.go:168] "Request Body" body=""
	I1202 21:10:16.295189  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:16.295558  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:16.794811  307731 type.go:168] "Request Body" body=""
	I1202 21:10:16.794881  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:16.795143  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:17.294834  307731 type.go:168] "Request Body" body=""
	I1202 21:10:17.294938  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:17.295260  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:17.794952  307731 type.go:168] "Request Body" body=""
	I1202 21:10:17.795031  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:17.795318  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:18.294867  307731 type.go:168] "Request Body" body=""
	I1202 21:10:18.294954  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:18.295206  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:18.295258  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:18.794947  307731 type.go:168] "Request Body" body=""
	I1202 21:10:18.795023  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:18.795317  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:19.294956  307731 type.go:168] "Request Body" body=""
	I1202 21:10:19.295038  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:19.295370  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:19.794876  307731 type.go:168] "Request Body" body=""
	I1202 21:10:19.794970  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:19.795282  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:20.295271  307731 type.go:168] "Request Body" body=""
	I1202 21:10:20.295345  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:20.295682  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:20.295746  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:20.795510  307731 type.go:168] "Request Body" body=""
	I1202 21:10:20.795586  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:20.795908  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:21.295384  307731 type.go:168] "Request Body" body=""
	I1202 21:10:21.295457  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:21.295714  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:21.795556  307731 type.go:168] "Request Body" body=""
	I1202 21:10:21.795634  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:21.795949  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:22.295726  307731 type.go:168] "Request Body" body=""
	I1202 21:10:22.295802  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:22.296133  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:22.296198  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:22.795455  307731 type.go:168] "Request Body" body=""
	I1202 21:10:22.795537  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:22.795801  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:23.295603  307731 type.go:168] "Request Body" body=""
	I1202 21:10:23.295679  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:23.296049  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:23.795725  307731 type.go:168] "Request Body" body=""
	I1202 21:10:23.795807  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:23.796143  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:24.294826  307731 type.go:168] "Request Body" body=""
	I1202 21:10:24.294902  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:24.295188  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:24.795853  307731 type.go:168] "Request Body" body=""
	I1202 21:10:24.795928  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:24.796234  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:24.796284  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:25.294847  307731 type.go:168] "Request Body" body=""
	I1202 21:10:25.294948  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:25.295306  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:25.794855  307731 type.go:168] "Request Body" body=""
	I1202 21:10:25.794922  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:25.795171  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:26.294939  307731 type.go:168] "Request Body" body=""
	I1202 21:10:26.295012  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:26.295321  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:26.795034  307731 type.go:168] "Request Body" body=""
	I1202 21:10:26.795116  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:26.795438  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:27.294916  307731 type.go:168] "Request Body" body=""
	I1202 21:10:27.294995  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:27.295345  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:27.295395  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:27.794938  307731 type.go:168] "Request Body" body=""
	I1202 21:10:27.795010  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:27.795348  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:28.294934  307731 type.go:168] "Request Body" body=""
	I1202 21:10:28.295009  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:28.295346  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:28.794910  307731 type.go:168] "Request Body" body=""
	I1202 21:10:28.794984  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:28.795299  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:29.294923  307731 type.go:168] "Request Body" body=""
	I1202 21:10:29.295009  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:29.295351  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:29.295418  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:29.795094  307731 type.go:168] "Request Body" body=""
	I1202 21:10:29.795169  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:29.795504  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:30.295472  307731 type.go:168] "Request Body" body=""
	I1202 21:10:30.295550  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:30.295841  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:30.795670  307731 type.go:168] "Request Body" body=""
	I1202 21:10:30.795750  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:30.796084  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:31.294839  307731 type.go:168] "Request Body" body=""
	I1202 21:10:31.294919  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:31.295203  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:31.794808  307731 type.go:168] "Request Body" body=""
	I1202 21:10:31.794881  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:31.795146  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:31.795189  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:32.294872  307731 type.go:168] "Request Body" body=""
	I1202 21:10:32.294951  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:32.295277  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:32.517612  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:10:32.588466  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:32.591823  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:32.591933  307731 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
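At this point the addon machinery gives up on default-storageclass and surfaces the accumulated callback failure as a single warning. A rough sketch of that shape of error aggregation, assuming Go 1.20+ for errors.Join; enableAddon and the callback literal are hypothetical, not minikube's addons package.

package main

import (
	"errors"
	"fmt"
)

// enableAddon runs each callback and joins any failures into one error,
// the shape reported above as "running callbacks: [...]".
func enableAddon(callbacks ...func() error) error {
	var errs []error
	for _, cb := range callbacks {
		if err := cb(); err != nil {
			errs = append(errs, err)
		}
	}
	return errors.Join(errs...)
}

func main() {
	err := enableAddon(func() error {
		return errors.New("kubectl apply --force -f storageclass.yaml: exit status 1")
	})
	if err != nil {
		fmt.Println("! Enabling 'default-storageclass' returned an error: running callbacks:", err)
	}
}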
	I1202 21:10:32.795459  307731 type.go:168] "Request Body" body=""
	I1202 21:10:32.795532  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:32.795852  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:33.295131  307731 type.go:168] "Request Body" body=""
	I1202 21:10:33.295202  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:33.295466  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:33.794891  307731 type.go:168] "Request Body" body=""
	I1202 21:10:33.794962  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:33.795259  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:33.795314  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:34.294911  307731 type.go:168] "Request Body" body=""
	I1202 21:10:34.294983  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:34.295307  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:34.795003  307731 type.go:168] "Request Body" body=""
	I1202 21:10:34.795074  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:34.795374  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:35.183965  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:10:35.239016  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:35.242188  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:10:35.242221  307731 retry.go:31] will retry after 25.961571555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
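
[Editor's note] The "retry.go:31] will retry after 25.961571555s" line above reflects a jittered backoff between apply attempts; the uneven durations across retries suggest randomized sleep intervals. A minimal sketch of that pattern (assumed shape only; minikube's actual retry.go may differ):

// retry_sketch.go - a minimal backoff-retry sketch; assumed shape only,
// not minikube's actual retry.go implementation.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn until it succeeds or attempts are exhausted,
// sleeping an exponentially growing, jittered interval between tries.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Exponential backoff with jitter, mirroring the uneven
		// "will retry after ..." durations seen in the log.
		sleep := base << uint(i)
		jitter := time.Duration(rand.Int63n(int64(sleep)))
		sleep = sleep/2 + jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	err := retryWithBackoff(5, 2*time.Second, func() error {
		return fmt.Errorf("apply failed: connection refused") // stand-in for the kubectl apply above
	})
	fmt.Println("gave up:", err)
}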
	I1202 21:10:35.295555  307731 type.go:168] "Request Body" body=""
	I1202 21:10:35.295639  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:35.295975  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:35.795775  307731 type.go:168] "Request Body" body=""
	I1202 21:10:35.795845  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:35.796134  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:35.796175  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:36.295019  307731 type.go:168] "Request Body" body=""
	I1202 21:10:36.295091  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:36.295347  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:36.794927  307731 type.go:168] "Request Body" body=""
	I1202 21:10:36.795019  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:36.795359  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:37.295060  307731 type.go:168] "Request Body" body=""
	I1202 21:10:37.295132  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:37.295466  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:37.795743  307731 type.go:168] "Request Body" body=""
	I1202 21:10:37.795817  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:37.796071  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:38.295875  307731 type.go:168] "Request Body" body=""
	I1202 21:10:38.295951  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:38.296303  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:38.296363  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:38.794921  307731 type.go:168] "Request Body" body=""
	I1202 21:10:38.794997  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:38.795362  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:39.295633  307731 type.go:168] "Request Body" body=""
	I1202 21:10:39.295705  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:39.295992  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:39.795820  307731 type.go:168] "Request Body" body=""
	I1202 21:10:39.795894  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:39.796194  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:40.295848  307731 type.go:168] "Request Body" body=""
	I1202 21:10:40.295936  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:40.296337  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:40.296429  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:40.794824  307731 type.go:168] "Request Body" body=""
	I1202 21:10:40.794917  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:40.795169  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:41.294920  307731 type.go:168] "Request Body" body=""
	I1202 21:10:41.294994  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:41.295356  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:41.794929  307731 type.go:168] "Request Body" body=""
	I1202 21:10:41.795010  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:41.795377  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:42.295089  307731 type.go:168] "Request Body" body=""
	I1202 21:10:42.295192  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:42.295500  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:42.795194  307731 type.go:168] "Request Body" body=""
	I1202 21:10:42.795316  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:42.795641  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:42.795694  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:43.295520  307731 type.go:168] "Request Body" body=""
	I1202 21:10:43.295594  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:43.295933  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:43.795644  307731 type.go:168] "Request Body" body=""
	I1202 21:10:43.795714  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:43.795981  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:44.295768  307731 type.go:168] "Request Body" body=""
	I1202 21:10:44.295846  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:44.296173  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:44.794885  307731 type.go:168] "Request Body" body=""
	I1202 21:10:44.794966  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:44.795306  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:45.294922  307731 type.go:168] "Request Body" body=""
	I1202 21:10:45.295001  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:45.295295  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:45.295340  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:45.794909  307731 type.go:168] "Request Body" body=""
	I1202 21:10:45.794981  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:45.795320  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:46.295077  307731 type.go:168] "Request Body" body=""
	I1202 21:10:46.295153  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:46.295482  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:46.795187  307731 type.go:168] "Request Body" body=""
	I1202 21:10:46.795257  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:46.795513  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:47.294913  307731 type.go:168] "Request Body" body=""
	I1202 21:10:47.294985  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:47.295277  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:47.794962  307731 type.go:168] "Request Body" body=""
	I1202 21:10:47.795042  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:47.795380  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:47.795437  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:48.295512  307731 type.go:168] "Request Body" body=""
	I1202 21:10:48.295579  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:48.295842  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:48.795623  307731 type.go:168] "Request Body" body=""
	I1202 21:10:48.795698  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:48.796054  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:49.295731  307731 type.go:168] "Request Body" body=""
	I1202 21:10:49.295806  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:49.296154  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:49.795443  307731 type.go:168] "Request Body" body=""
	I1202 21:10:49.795545  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:49.795873  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:49.795941  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:50.295652  307731 type.go:168] "Request Body" body=""
	I1202 21:10:50.295726  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:50.296078  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:50.795731  307731 type.go:168] "Request Body" body=""
	I1202 21:10:50.795808  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:50.796159  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:51.295466  307731 type.go:168] "Request Body" body=""
	I1202 21:10:51.295534  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:51.295787  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:51.795602  307731 type.go:168] "Request Body" body=""
	I1202 21:10:51.795679  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:51.796007  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:51.796073  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:52.295850  307731 type.go:168] "Request Body" body=""
	I1202 21:10:52.295932  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:52.296267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:52.794970  307731 type.go:168] "Request Body" body=""
	I1202 21:10:52.795045  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:52.795299  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:53.294905  307731 type.go:168] "Request Body" body=""
	I1202 21:10:53.294979  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:53.295320  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:53.794897  307731 type.go:168] "Request Body" body=""
	I1202 21:10:53.794971  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:53.795329  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:54.295102  307731 type.go:168] "Request Body" body=""
	I1202 21:10:54.295168  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:54.295441  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:54.295539  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:54.794904  307731 type.go:168] "Request Body" body=""
	I1202 21:10:54.794979  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:54.795343  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:55.295052  307731 type.go:168] "Request Body" body=""
	I1202 21:10:55.295132  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:55.295482  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:55.795785  307731 type.go:168] "Request Body" body=""
	I1202 21:10:55.795851  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:55.796131  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:56.294983  307731 type.go:168] "Request Body" body=""
	I1202 21:10:56.295063  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:56.295386  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:56.795123  307731 type.go:168] "Request Body" body=""
	I1202 21:10:56.795230  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:56.795573  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:56.795626  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:57.294814  307731 type.go:168] "Request Body" body=""
	I1202 21:10:57.294906  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:57.295200  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:57.794903  307731 type.go:168] "Request Body" body=""
	I1202 21:10:57.794977  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:57.795292  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:58.294897  307731 type.go:168] "Request Body" body=""
	I1202 21:10:58.294972  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:58.295313  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:58.795026  307731 type.go:168] "Request Body" body=""
	I1202 21:10:58.795092  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:58.795360  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:59.294927  307731 type.go:168] "Request Body" body=""
	I1202 21:10:59.295017  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:59.295353  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:59.295412  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:59.795027  307731 type.go:168] "Request Body" body=""
	I1202 21:10:59.795102  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:59.795393  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:00.298236  307731 type.go:168] "Request Body" body=""
	I1202 21:11:00.298341  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:00.298735  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:00.795120  307731 type.go:168] "Request Body" body=""
	I1202 21:11:00.795194  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:00.795534  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:01.204061  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:11:01.267039  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:11:01.267090  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:11:01.267174  307731 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 21:11:01.270170  307731 out.go:179] * Enabled addons: 
	I1202 21:11:01.273921  307731 addons.go:530] duration metric: took 1m51.005043213s for enable addons: enabled=[]
	I1202 21:11:01.295263  307731 type.go:168] "Request Body" body=""
	I1202 21:11:01.295359  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:01.295653  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:01.295706  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
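
[Editor's note] The node_ready.go loop above polls the node object roughly every 500ms and keeps retrying on "connection refused" until the apiserver comes back. A sketch of the same Ready-condition check using client-go (a hypothetical standalone version; minikube's node_ready.go is internal and may differ in detail):

// node_ready_sketch.go - hypothetical standalone version of the Ready poll;
// assumes a reachable kubeconfig at the default home path.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-753958", metav1.GetOptions{})
		if err != nil {
			// Matches the W... node_ready.go:55 lines: log and keep retrying.
			fmt.Printf("error getting node (will retry): %v\n", err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between polls
	}
}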
	I1202 21:11:01.795541  307731 type.go:168] "Request Body" body=""
	I1202 21:11:01.795613  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:01.795971  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:02.295791  307731 type.go:168] "Request Body" body=""
	I1202 21:11:02.295861  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:02.296199  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:02.794952  307731 type.go:168] "Request Body" body=""
	I1202 21:11:02.795033  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:02.795359  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:03.294886  307731 type.go:168] "Request Body" body=""
	I1202 21:11:03.294966  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:03.295285  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:03.795031  307731 type.go:168] "Request Body" body=""
	I1202 21:11:03.795108  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:03.795398  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:03.795445  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:04.295135  307731 type.go:168] "Request Body" body=""
	I1202 21:11:04.295207  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:04.295489  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:04.797772  307731 type.go:168] "Request Body" body=""
	I1202 21:11:04.797855  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:04.798166  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:05.294871  307731 type.go:168] "Request Body" body=""
	I1202 21:11:05.294952  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:05.295295  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:05.795024  307731 type.go:168] "Request Body" body=""
	I1202 21:11:05.795114  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:05.795840  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:05.795891  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:06.295375  307731 type.go:168] "Request Body" body=""
	I1202 21:11:06.295448  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:06.295699  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:06.795562  307731 type.go:168] "Request Body" body=""
	I1202 21:11:06.795637  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:06.795987  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:07.295777  307731 type.go:168] "Request Body" body=""
	I1202 21:11:07.295853  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:07.296159  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:07.795390  307731 type.go:168] "Request Body" body=""
	I1202 21:11:07.795462  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:07.795723  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:08.295538  307731 type.go:168] "Request Body" body=""
	I1202 21:11:08.295622  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:08.295961  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:08.296019  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:08.795765  307731 type.go:168] "Request Body" body=""
	I1202 21:11:08.795839  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:08.796212  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:09.295353  307731 type.go:168] "Request Body" body=""
	I1202 21:11:09.295424  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:09.295732  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:09.795220  307731 type.go:168] "Request Body" body=""
	I1202 21:11:09.795301  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:09.795760  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:10.295747  307731 type.go:168] "Request Body" body=""
	I1202 21:11:10.295830  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:10.296197  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:10.296275  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:10.794847  307731 type.go:168] "Request Body" body=""
	I1202 21:11:10.794927  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:10.795204  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:11.295063  307731 type.go:168] "Request Body" body=""
	I1202 21:11:11.295142  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:11.295478  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:11.795187  307731 type.go:168] "Request Body" body=""
	I1202 21:11:11.795260  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:11.795582  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:12.294899  307731 type.go:168] "Request Body" body=""
	I1202 21:11:12.294983  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:12.295257  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:12.794909  307731 type.go:168] "Request Body" body=""
	I1202 21:11:12.794985  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:12.795329  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:12.795384  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:13.295067  307731 type.go:168] "Request Body" body=""
	I1202 21:11:13.295150  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:13.295484  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:13.794911  307731 type.go:168] "Request Body" body=""
	I1202 21:11:13.794980  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:13.795267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:14.294851  307731 type.go:168] "Request Body" body=""
	I1202 21:11:14.294929  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:14.295263  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:14.794845  307731 type.go:168] "Request Body" body=""
	I1202 21:11:14.794920  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:14.795267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:15.294957  307731 type.go:168] "Request Body" body=""
	I1202 21:11:15.295024  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:15.295277  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:15.295317  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:15.794930  307731 type.go:168] "Request Body" body=""
	I1202 21:11:15.795005  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:15.795367  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:16.294926  307731 type.go:168] "Request Body" body=""
	I1202 21:11:16.295007  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:16.295351  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:16.794824  307731 type.go:168] "Request Body" body=""
	I1202 21:11:16.794897  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:16.795171  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:17.294881  307731 type.go:168] "Request Body" body=""
	I1202 21:11:17.294952  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:17.295258  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:17.794911  307731 type.go:168] "Request Body" body=""
	I1202 21:11:17.795029  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:17.795337  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:17.795384  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:18.294837  307731 type.go:168] "Request Body" body=""
	I1202 21:11:18.294907  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:18.295270  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:18.794916  307731 type.go:168] "Request Body" body=""
	I1202 21:11:18.794993  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:18.795332  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:19.295034  307731 type.go:168] "Request Body" body=""
	I1202 21:11:19.295134  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:19.295446  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:19.795123  307731 type.go:168] "Request Body" body=""
	I1202 21:11:19.795197  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:19.795502  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:19.795550  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:20.295498  307731 type.go:168] "Request Body" body=""
	I1202 21:11:20.295582  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:20.295890  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:20.795670  307731 type.go:168] "Request Body" body=""
	I1202 21:11:20.795745  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:20.796070  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:21.294797  307731 type.go:168] "Request Body" body=""
	I1202 21:11:21.294862  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:21.295106  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:21.795856  307731 type.go:168] "Request Body" body=""
	I1202 21:11:21.795927  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:21.796206  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:21.796258  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:22.294911  307731 type.go:168] "Request Body" body=""
	I1202 21:11:22.295002  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:22.295336  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:22.795444  307731 type.go:168] "Request Body" body=""
	I1202 21:11:22.795511  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:22.795821  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:23.295637  307731 type.go:168] "Request Body" body=""
	I1202 21:11:23.295716  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:23.296030  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:23.795816  307731 type.go:168] "Request Body" body=""
	I1202 21:11:23.795911  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:23.796220  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:24.294908  307731 type.go:168] "Request Body" body=""
	I1202 21:11:24.295038  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:24.295400  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:24.295449  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:24.794928  307731 type.go:168] "Request Body" body=""
	I1202 21:11:24.795056  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:24.795347  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:25.294949  307731 type.go:168] "Request Body" body=""
	I1202 21:11:25.295023  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:25.295327  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:25.795650  307731 type.go:168] "Request Body" body=""
	I1202 21:11:25.795726  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:25.795991  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:26.294874  307731 type.go:168] "Request Body" body=""
	I1202 21:11:26.294952  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:26.295297  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:26.794989  307731 type.go:168] "Request Body" body=""
	I1202 21:11:26.795064  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:26.795394  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:26.795449  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-753958 repeated every ~500ms from 21:11:27 through 21:12:25, each request returning no response (status="" milliseconds=0); node_ready.go logged the same warning, error getting node "functional-753958" condition "Ready" status (will retry): dial tcp 192.168.49.2:8441: connect: connection refused, roughly every 2 seconds throughout ...]
	I1202 21:12:26.294932  307731 type.go:168] "Request Body" body=""
	I1202 21:12:26.295012  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:26.295307  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:26.795516  307731 type.go:168] "Request Body" body=""
	I1202 21:12:26.795596  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:26.795868  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:27.295674  307731 type.go:168] "Request Body" body=""
	I1202 21:12:27.295752  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:27.296076  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:27.795851  307731 type.go:168] "Request Body" body=""
	I1202 21:12:27.795930  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:27.796225  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:27.796269  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:28.294924  307731 type.go:168] "Request Body" body=""
	I1202 21:12:28.294993  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:28.295262  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:28.794903  307731 type.go:168] "Request Body" body=""
	I1202 21:12:28.794974  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:28.795267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:29.294904  307731 type.go:168] "Request Body" body=""
	I1202 21:12:29.294980  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:29.295344  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:29.795043  307731 type.go:168] "Request Body" body=""
	I1202 21:12:29.795116  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:29.795431  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:30.295491  307731 type.go:168] "Request Body" body=""
	I1202 21:12:30.295565  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:30.295854  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:30.295900  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:30.795622  307731 type.go:168] "Request Body" body=""
	I1202 21:12:30.795701  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:30.796019  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:31.294854  307731 type.go:168] "Request Body" body=""
	I1202 21:12:31.294938  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:31.295224  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:31.794939  307731 type.go:168] "Request Body" body=""
	I1202 21:12:31.795013  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:31.795329  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:32.295021  307731 type.go:168] "Request Body" body=""
	I1202 21:12:32.295094  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:32.295426  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:32.794863  307731 type.go:168] "Request Body" body=""
	I1202 21:12:32.794937  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:32.795261  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:32.795313  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:33.294887  307731 type.go:168] "Request Body" body=""
	I1202 21:12:33.294988  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:33.295274  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:33.794936  307731 type.go:168] "Request Body" body=""
	I1202 21:12:33.795011  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:33.795317  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:34.294978  307731 type.go:168] "Request Body" body=""
	I1202 21:12:34.295048  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:34.295357  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:34.794909  307731 type.go:168] "Request Body" body=""
	I1202 21:12:34.795001  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:34.795318  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:34.795369  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:35.294958  307731 type.go:168] "Request Body" body=""
	I1202 21:12:35.295031  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:35.295367  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:35.794864  307731 type.go:168] "Request Body" body=""
	I1202 21:12:35.794958  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:35.795262  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:36.294964  307731 type.go:168] "Request Body" body=""
	I1202 21:12:36.295035  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:36.295312  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:36.794952  307731 type.go:168] "Request Body" body=""
	I1202 21:12:36.795029  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:36.795319  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:37.294854  307731 type.go:168] "Request Body" body=""
	I1202 21:12:37.294923  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:37.295225  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:37.295279  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:37.794989  307731 type.go:168] "Request Body" body=""
	I1202 21:12:37.795062  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:37.795394  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:38.295093  307731 type.go:168] "Request Body" body=""
	I1202 21:12:38.295216  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:38.295502  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:38.795751  307731 type.go:168] "Request Body" body=""
	I1202 21:12:38.795829  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:38.796083  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:39.294834  307731 type.go:168] "Request Body" body=""
	I1202 21:12:39.294907  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:39.295189  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:39.794951  307731 type.go:168] "Request Body" body=""
	I1202 21:12:39.795024  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:39.795313  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:39.795360  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:40.295208  307731 type.go:168] "Request Body" body=""
	I1202 21:12:40.295278  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:40.295547  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:40.794916  307731 type.go:168] "Request Body" body=""
	I1202 21:12:40.794993  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:40.795303  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:41.295146  307731 type.go:168] "Request Body" body=""
	I1202 21:12:41.295226  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:41.295541  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:41.795842  307731 type.go:168] "Request Body" body=""
	I1202 21:12:41.795912  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:41.796200  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:41.796251  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:42.294960  307731 type.go:168] "Request Body" body=""
	I1202 21:12:42.295046  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:42.295487  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:42.795065  307731 type.go:168] "Request Body" body=""
	I1202 21:12:42.795138  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:42.795475  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:43.295783  307731 type.go:168] "Request Body" body=""
	I1202 21:12:43.295900  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:43.296159  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:43.794833  307731 type.go:168] "Request Body" body=""
	I1202 21:12:43.794907  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:43.795259  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:44.294965  307731 type.go:168] "Request Body" body=""
	I1202 21:12:44.295055  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:44.295393  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:44.295450  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:44.794874  307731 type.go:168] "Request Body" body=""
	I1202 21:12:44.794942  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:44.795255  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:45.295105  307731 type.go:168] "Request Body" body=""
	I1202 21:12:45.295214  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:45.295767  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:45.795233  307731 type.go:168] "Request Body" body=""
	I1202 21:12:45.795311  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:45.795638  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:46.295163  307731 type.go:168] "Request Body" body=""
	I1202 21:12:46.295245  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:46.295588  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:46.295650  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:46.794939  307731 type.go:168] "Request Body" body=""
	I1202 21:12:46.795010  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:46.795360  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:47.295090  307731 type.go:168] "Request Body" body=""
	I1202 21:12:47.295174  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:47.295497  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:47.794869  307731 type.go:168] "Request Body" body=""
	I1202 21:12:47.794947  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:47.795235  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:48.294861  307731 type.go:168] "Request Body" body=""
	I1202 21:12:48.294942  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:48.295271  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:48.794853  307731 type.go:168] "Request Body" body=""
	I1202 21:12:48.794940  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:48.795286  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:48.795342  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:49.294843  307731 type.go:168] "Request Body" body=""
	I1202 21:12:49.294911  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:49.295164  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:49.794846  307731 type.go:168] "Request Body" body=""
	I1202 21:12:49.794949  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:49.795276  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:50.294983  307731 type.go:168] "Request Body" body=""
	I1202 21:12:50.295060  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:50.295363  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:50.795576  307731 type.go:168] "Request Body" body=""
	I1202 21:12:50.795648  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:50.795900  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:50.795939  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:51.295852  307731 type.go:168] "Request Body" body=""
	I1202 21:12:51.295925  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:51.296265  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:51.794918  307731 type.go:168] "Request Body" body=""
	I1202 21:12:51.795021  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:51.795350  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:52.294859  307731 type.go:168] "Request Body" body=""
	I1202 21:12:52.294960  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:52.295280  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:52.794928  307731 type.go:168] "Request Body" body=""
	I1202 21:12:52.795027  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:52.795353  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:53.295047  307731 type.go:168] "Request Body" body=""
	I1202 21:12:53.295126  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:53.295420  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:53.295466  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:53.794824  307731 type.go:168] "Request Body" body=""
	I1202 21:12:53.794894  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:53.795146  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:54.294887  307731 type.go:168] "Request Body" body=""
	I1202 21:12:54.294966  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:54.295276  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:54.794978  307731 type.go:168] "Request Body" body=""
	I1202 21:12:54.795118  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:54.795446  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:55.294835  307731 type.go:168] "Request Body" body=""
	I1202 21:12:55.294908  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:55.295159  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:55.794871  307731 type.go:168] "Request Body" body=""
	I1202 21:12:55.794955  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:55.795341  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:55.795414  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:56.294941  307731 type.go:168] "Request Body" body=""
	I1202 21:12:56.295014  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:56.295303  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:56.794873  307731 type.go:168] "Request Body" body=""
	I1202 21:12:56.794965  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:56.795235  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:57.294970  307731 type.go:168] "Request Body" body=""
	I1202 21:12:57.295048  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:57.295340  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:57.794944  307731 type.go:168] "Request Body" body=""
	I1202 21:12:57.795015  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:57.795337  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:58.294803  307731 type.go:168] "Request Body" body=""
	I1202 21:12:58.294871  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:58.295161  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:58.295224  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:58.794910  307731 type.go:168] "Request Body" body=""
	I1202 21:12:58.795009  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:58.795298  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:59.295027  307731 type.go:168] "Request Body" body=""
	I1202 21:12:59.295104  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:59.295440  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:59.794850  307731 type.go:168] "Request Body" body=""
	I1202 21:12:59.794922  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:59.795190  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:00.295830  307731 type.go:168] "Request Body" body=""
	I1202 21:13:00.295907  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:00.296237  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:00.296286  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:00.794930  307731 type.go:168] "Request Body" body=""
	I1202 21:13:00.795003  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:00.795320  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:01.294866  307731 type.go:168] "Request Body" body=""
	I1202 21:13:01.294943  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:01.295254  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:01.794964  307731 type.go:168] "Request Body" body=""
	I1202 21:13:01.795065  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:01.795411  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:02.294930  307731 type.go:168] "Request Body" body=""
	I1202 21:13:02.295013  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:02.295348  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:02.795414  307731 type.go:168] "Request Body" body=""
	I1202 21:13:02.795493  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:02.795754  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:02.795808  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:03.295626  307731 type.go:168] "Request Body" body=""
	I1202 21:13:03.295706  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:03.296056  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:03.795867  307731 type.go:168] "Request Body" body=""
	I1202 21:13:03.795947  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:03.796294  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:04.294876  307731 type.go:168] "Request Body" body=""
	I1202 21:13:04.294954  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:04.295212  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:04.794889  307731 type.go:168] "Request Body" body=""
	I1202 21:13:04.794976  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:04.795297  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:05.295036  307731 type.go:168] "Request Body" body=""
	I1202 21:13:05.295111  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:05.295416  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:05.295461  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:05.795108  307731 type.go:168] "Request Body" body=""
	I1202 21:13:05.795173  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:05.795466  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:06.295448  307731 type.go:168] "Request Body" body=""
	I1202 21:13:06.295528  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:06.296185  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:06.794905  307731 type.go:168] "Request Body" body=""
	I1202 21:13:06.794985  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:06.795346  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:07.295651  307731 type.go:168] "Request Body" body=""
	I1202 21:13:07.295719  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:07.296051  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:07.296110  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:07.795853  307731 type.go:168] "Request Body" body=""
	I1202 21:13:07.795926  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:07.796263  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:08.294869  307731 type.go:168] "Request Body" body=""
	I1202 21:13:08.294949  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:08.295301  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:08.795548  307731 type.go:168] "Request Body" body=""
	I1202 21:13:08.795627  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:08.795895  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:09.295682  307731 type.go:168] "Request Body" body=""
	I1202 21:13:09.295756  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:09.296097  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:09.296151  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:09.794843  307731 type.go:168] "Request Body" body=""
	I1202 21:13:09.794918  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:09.795258  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:10.295332  307731 type.go:168] "Request Body" body=""
	I1202 21:13:10.295413  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:10.295727  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:10.795553  307731 type.go:168] "Request Body" body=""
	I1202 21:13:10.795634  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:10.796008  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:11.295865  307731 type.go:168] "Request Body" body=""
	I1202 21:13:11.295935  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:11.296253  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:11.296301  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:11.795670  307731 type.go:168] "Request Body" body=""
	I1202 21:13:11.795744  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:11.796123  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:12.294883  307731 type.go:168] "Request Body" body=""
	I1202 21:13:12.294963  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:12.295307  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:12.795041  307731 type.go:168] "Request Body" body=""
	I1202 21:13:12.795119  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:12.795456  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:13.295695  307731 type.go:168] "Request Body" body=""
	I1202 21:13:13.295760  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:13.296010  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:13.795731  307731 type.go:168] "Request Body" body=""
	I1202 21:13:13.795805  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:13.796135  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:13.796187  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:14.294883  307731 type.go:168] "Request Body" body=""
	I1202 21:13:14.294963  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:14.295317  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:14.795004  307731 type.go:168] "Request Body" body=""
	I1202 21:13:14.795086  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:14.795364  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:15.294928  307731 type.go:168] "Request Body" body=""
	I1202 21:13:15.294999  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:15.295367  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:15.794965  307731 type.go:168] "Request Body" body=""
	I1202 21:13:15.795053  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:15.795420  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:16.294820  307731 type.go:168] "Request Body" body=""
	I1202 21:13:16.294896  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:16.295225  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:16.295299  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-753958 polled every ~500ms from 21:13:16.794 through 21:14:17.796 with identical request/response entries; every attempt failed with "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go:55 logged the same will-retry warning roughly every 2s ...]
	I1202 21:14:18.294860  307731 type.go:168] "Request Body" body=""
	I1202 21:14:18.294933  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:18.295293  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:18.794860  307731 type.go:168] "Request Body" body=""
	I1202 21:14:18.794937  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:18.795278  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:19.294969  307731 type.go:168] "Request Body" body=""
	I1202 21:14:19.295036  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:19.295289  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:19.794925  307731 type.go:168] "Request Body" body=""
	I1202 21:14:19.795003  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:19.795302  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:20.295739  307731 type.go:168] "Request Body" body=""
	I1202 21:14:20.295816  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:20.296151  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:20.296213  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:20.795440  307731 type.go:168] "Request Body" body=""
	I1202 21:14:20.795511  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:20.795763  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:21.295657  307731 type.go:168] "Request Body" body=""
	I1202 21:14:21.295765  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:21.296103  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:21.795787  307731 type.go:168] "Request Body" body=""
	I1202 21:14:21.795862  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:21.796230  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:22.294911  307731 type.go:168] "Request Body" body=""
	I1202 21:14:22.294978  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:22.295233  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:22.794926  307731 type.go:168] "Request Body" body=""
	I1202 21:14:22.794999  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:22.795311  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:22.795368  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:23.294926  307731 type.go:168] "Request Body" body=""
	I1202 21:14:23.295000  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:23.295323  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:23.795654  307731 type.go:168] "Request Body" body=""
	I1202 21:14:23.795728  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:23.795993  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:24.295761  307731 type.go:168] "Request Body" body=""
	I1202 21:14:24.295839  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:24.296161  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:24.794909  307731 type.go:168] "Request Body" body=""
	I1202 21:14:24.794986  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:24.795310  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:25.294859  307731 type.go:168] "Request Body" body=""
	I1202 21:14:25.294935  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:25.295190  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:25.295232  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:25.794923  307731 type.go:168] "Request Body" body=""
	I1202 21:14:25.794997  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:25.795341  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:26.294936  307731 type.go:168] "Request Body" body=""
	I1202 21:14:26.295020  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:26.295383  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:26.795713  307731 type.go:168] "Request Body" body=""
	I1202 21:14:26.795787  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:26.796101  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:27.294827  307731 type.go:168] "Request Body" body=""
	I1202 21:14:27.294901  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:27.295233  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:27.295286  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:27.794831  307731 type.go:168] "Request Body" body=""
	I1202 21:14:27.794916  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:27.795255  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:28.294946  307731 type.go:168] "Request Body" body=""
	I1202 21:14:28.295018  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:28.295278  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:28.794910  307731 type.go:168] "Request Body" body=""
	I1202 21:14:28.794997  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:28.795366  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:29.295050  307731 type.go:168] "Request Body" body=""
	I1202 21:14:29.295134  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:29.295479  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:29.295536  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:29.795762  307731 type.go:168] "Request Body" body=""
	I1202 21:14:29.795842  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:29.796119  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:30.295026  307731 type.go:168] "Request Body" body=""
	I1202 21:14:30.295100  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:30.295424  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:30.795136  307731 type.go:168] "Request Body" body=""
	I1202 21:14:30.795210  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:30.795534  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:31.295356  307731 type.go:168] "Request Body" body=""
	I1202 21:14:31.295420  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:31.295666  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:31.295705  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:31.795445  307731 type.go:168] "Request Body" body=""
	I1202 21:14:31.795523  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:31.795898  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:32.295545  307731 type.go:168] "Request Body" body=""
	I1202 21:14:32.295621  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:32.295915  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:32.795216  307731 type.go:168] "Request Body" body=""
	I1202 21:14:32.795294  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:32.795544  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:33.294908  307731 type.go:168] "Request Body" body=""
	I1202 21:14:33.294979  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:33.295290  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:33.795032  307731 type.go:168] "Request Body" body=""
	I1202 21:14:33.795113  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:33.795460  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:33.795521  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:34.294847  307731 type.go:168] "Request Body" body=""
	I1202 21:14:34.294919  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:34.295175  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:34.794878  307731 type.go:168] "Request Body" body=""
	I1202 21:14:34.794952  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:34.795309  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:35.295020  307731 type.go:168] "Request Body" body=""
	I1202 21:14:35.295113  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:35.295444  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:35.795727  307731 type.go:168] "Request Body" body=""
	I1202 21:14:35.795796  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:35.796110  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:35.796169  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:36.294868  307731 type.go:168] "Request Body" body=""
	I1202 21:14:36.294941  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:36.295256  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:36.794930  307731 type.go:168] "Request Body" body=""
	I1202 21:14:36.795012  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:36.795362  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:37.294916  307731 type.go:168] "Request Body" body=""
	I1202 21:14:37.294983  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:37.295233  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:37.794890  307731 type.go:168] "Request Body" body=""
	I1202 21:14:37.794972  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:37.795286  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:38.294932  307731 type.go:168] "Request Body" body=""
	I1202 21:14:38.295012  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:38.295350  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:38.295411  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:38.795081  307731 type.go:168] "Request Body" body=""
	I1202 21:14:38.795152  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:38.795443  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:39.294924  307731 type.go:168] "Request Body" body=""
	I1202 21:14:39.294999  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:39.295318  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:39.795068  307731 type.go:168] "Request Body" body=""
	I1202 21:14:39.795153  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:39.795518  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:40.295494  307731 type.go:168] "Request Body" body=""
	I1202 21:14:40.295565  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:40.295837  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:40.295880  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:40.795619  307731 type.go:168] "Request Body" body=""
	I1202 21:14:40.795692  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:40.796024  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:41.294908  307731 type.go:168] "Request Body" body=""
	I1202 21:14:41.294987  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:41.295358  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:41.795647  307731 type.go:168] "Request Body" body=""
	I1202 21:14:41.795719  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:41.795987  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:42.295806  307731 type.go:168] "Request Body" body=""
	I1202 21:14:42.295896  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:42.296282  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:42.296340  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:42.794938  307731 type.go:168] "Request Body" body=""
	I1202 21:14:42.795011  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:42.795349  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:43.295078  307731 type.go:168] "Request Body" body=""
	I1202 21:14:43.295167  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:43.295472  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:43.794967  307731 type.go:168] "Request Body" body=""
	I1202 21:14:43.795039  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:43.795367  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:44.295072  307731 type.go:168] "Request Body" body=""
	I1202 21:14:44.295155  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:44.295479  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:44.795154  307731 type.go:168] "Request Body" body=""
	I1202 21:14:44.795226  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:44.795482  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:44.795526  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:45.295089  307731 type.go:168] "Request Body" body=""
	I1202 21:14:45.295173  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:45.295911  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:45.795805  307731 type.go:168] "Request Body" body=""
	I1202 21:14:45.795885  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:45.796164  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:46.295025  307731 type.go:168] "Request Body" body=""
	I1202 21:14:46.295107  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:46.295367  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:46.794949  307731 type.go:168] "Request Body" body=""
	I1202 21:14:46.795023  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:46.795381  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:47.294941  307731 type.go:168] "Request Body" body=""
	I1202 21:14:47.295015  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:47.295306  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:47.295354  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:47.794995  307731 type.go:168] "Request Body" body=""
	I1202 21:14:47.795072  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:47.795339  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:48.294900  307731 type.go:168] "Request Body" body=""
	I1202 21:14:48.294975  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:48.295293  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:48.794879  307731 type.go:168] "Request Body" body=""
	I1202 21:14:48.794954  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:48.795282  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:49.294865  307731 type.go:168] "Request Body" body=""
	I1202 21:14:49.294942  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:49.295208  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:49.794904  307731 type.go:168] "Request Body" body=""
	I1202 21:14:49.794976  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:49.795293  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:49.795354  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:50.295309  307731 type.go:168] "Request Body" body=""
	I1202 21:14:50.295381  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:50.295715  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:50.794949  307731 type.go:168] "Request Body" body=""
	I1202 21:14:50.795017  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:50.795263  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:51.294934  307731 type.go:168] "Request Body" body=""
	I1202 21:14:51.295008  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:51.295359  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:51.795069  307731 type.go:168] "Request Body" body=""
	I1202 21:14:51.795153  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:51.795518  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:51.795574  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:52.295227  307731 type.go:168] "Request Body" body=""
	I1202 21:14:52.295298  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:52.295552  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:52.794919  307731 type.go:168] "Request Body" body=""
	I1202 21:14:52.794995  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:52.795317  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:53.295040  307731 type.go:168] "Request Body" body=""
	I1202 21:14:53.295116  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:53.295449  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:53.794876  307731 type.go:168] "Request Body" body=""
	I1202 21:14:53.794947  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:53.795218  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:54.294904  307731 type.go:168] "Request Body" body=""
	I1202 21:14:54.294975  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:54.295320  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:54.295377  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:54.795076  307731 type.go:168] "Request Body" body=""
	I1202 21:14:54.795150  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:54.795490  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:55.295170  307731 type.go:168] "Request Body" body=""
	I1202 21:14:55.295241  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:55.295544  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:55.794930  307731 type.go:168] "Request Body" body=""
	I1202 21:14:55.795021  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:55.795317  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:56.295064  307731 type.go:168] "Request Body" body=""
	I1202 21:14:56.295148  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:56.295496  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:56.295551  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:56.795788  307731 type.go:168] "Request Body" body=""
	I1202 21:14:56.795881  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:56.796235  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:57.294957  307731 type.go:168] "Request Body" body=""
	I1202 21:14:57.295029  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:57.295368  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:57.795079  307731 type.go:168] "Request Body" body=""
	I1202 21:14:57.795157  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:57.795491  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:58.295762  307731 type.go:168] "Request Body" body=""
	I1202 21:14:58.295829  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:58.296084  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:58.296124  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:58.795828  307731 type.go:168] "Request Body" body=""
	I1202 21:14:58.795901  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:58.796192  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:59.294890  307731 type.go:168] "Request Body" body=""
	I1202 21:14:59.294971  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:59.295293  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:59.795662  307731 type.go:168] "Request Body" body=""
	I1202 21:14:59.795732  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:59.795995  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:00.294841  307731 type.go:168] "Request Body" body=""
	I1202 21:15:00.294929  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:00.295288  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:00.794964  307731 type.go:168] "Request Body" body=""
	I1202 21:15:00.795065  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:00.795443  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:15:00.795520  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:15:01.295565  307731 type.go:168] "Request Body" body=""
	I1202 21:15:01.295641  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:01.295933  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:01.795669  307731 type.go:168] "Request Body" body=""
	I1202 21:15:01.795744  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:01.796077  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:02.294851  307731 type.go:168] "Request Body" body=""
	I1202 21:15:02.294928  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:02.295300  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:02.794984  307731 type.go:168] "Request Body" body=""
	I1202 21:15:02.795058  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:02.795384  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:03.294942  307731 type.go:168] "Request Body" body=""
	I1202 21:15:03.295015  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:03.295368  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:15:03.295426  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:15:03.794971  307731 type.go:168] "Request Body" body=""
	I1202 21:15:03.795053  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:03.795395  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:04.295082  307731 type.go:168] "Request Body" body=""
	I1202 21:15:04.295157  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:04.295429  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:04.794958  307731 type.go:168] "Request Body" body=""
	I1202 21:15:04.795043  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:04.795426  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:05.294930  307731 type.go:168] "Request Body" body=""
	I1202 21:15:05.295018  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:05.295356  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:05.795116  307731 type.go:168] "Request Body" body=""
	I1202 21:15:05.795195  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:05.795515  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:15:05.795575  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:15:06.295370  307731 type.go:168] "Request Body" body=""
	I1202 21:15:06.295451  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:06.295771  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:06.795538  307731 type.go:168] "Request Body" body=""
	I1202 21:15:06.795617  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:06.795962  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:07.295701  307731 type.go:168] "Request Body" body=""
	I1202 21:15:07.295775  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:07.296023  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:07.795795  307731 type.go:168] "Request Body" body=""
	I1202 21:15:07.795872  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:07.796194  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:15:07.796261  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:15:08.294940  307731 type.go:168] "Request Body" body=""
	I1202 21:15:08.295013  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:08.295374  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:08.794862  307731 type.go:168] "Request Body" body=""
	I1202 21:15:08.794931  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:08.795235  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:09.294931  307731 type.go:168] "Request Body" body=""
	I1202 21:15:09.295007  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:09.295352  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:09.795086  307731 type.go:168] "Request Body" body=""
	I1202 21:15:09.795162  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:09.795514  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:10.299197  307731 type.go:168] "Request Body" body=""
	I1202 21:15:10.299301  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:10.299703  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:15:10.299761  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:15:10.795524  307731 type.go:168] "Request Body" body=""
	I1202 21:15:10.795615  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:10.796019  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:11.294844  307731 node_ready.go:38] duration metric: took 6m0.000140797s for node "functional-753958" to be "Ready" ...
	I1202 21:15:11.298019  307731 out.go:203] 
	W1202 21:15:11.300907  307731 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 21:15:11.300927  307731 out.go:285] * 
	W1202 21:15:11.303086  307731 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:15:11.306181  307731 out.go:203] 
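For reference, the Ready condition that this wait loop polls can be checked by hand once an apiserver answers; a minimal sketch, assuming the kubeconfig context is named after the profile:

	kubectl --context functional-753958 get node functional-753958 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'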
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.831249053Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.831259104Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.831273446Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.831285557Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.831297339Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.831308793Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.831326376Z" level=info msg="runtime interface created"
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.831333055Z" level=info msg="created NRI interface"
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.831343081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.831374530Z" level=info msg="Connect containerd service"
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.831645996Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.832751142Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.842854029Z" level=info msg="Start subscribing containerd event"
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.843061325Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.843124724Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.843068282Z" level=info msg="Start recovering state"
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.861144221Z" level=info msg="Start event monitor"
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.861331530Z" level=info msg="Start cni network conf syncer for default"
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.861398235Z" level=info msg="Start streaming server"
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.861467148Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.861523835Z" level=info msg="runtime interface starting up..."
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.861585675Z" level=info msg="starting plugins..."
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.861664368Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 21:09:08 functional-753958 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 02 21:09:08 functional-753958 containerd[5832]: time="2025-12-02T21:09:08.864406089Z" level=info msg="containerd successfully booted in 0.055350s"
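The "failed to load cni during init" line above is expected until a CNI config lands in /etc/cni/net.d; whether one ever arrived can be checked with something like:

	out/minikube-linux-arm64 -p functional-753958 ssh -- ls /etc/cni/net.d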
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:15:15.398846    9171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:15:15.399263    9171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:15:15.400776    9171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:15:15.401095    9171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:15:15.402526    9171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
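The same endpoint can also be probed from the host through the published port (8441/tcp maps to 127.0.0.1:33111 per the docker inspect output below); a sketch that should fail the same way while the apiserver is down:

	curl -k https://127.0.0.1:33111/healthz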
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 21:15:15 up  2:57,  0 user,  load average: 0.29, 0.35, 0.88
	Linux functional-753958 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:15:11 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:15:12 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 02 21:15:12 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:12 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:12 functional-753958 kubelet[8951]: E1202 21:15:12.601622    8951 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:15:12 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:15:12 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:15:13 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 02 21:15:13 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:13 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:13 functional-753958 kubelet[9046]: E1202 21:15:13.358322    9046 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:15:13 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:15:13 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:15:13 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 02 21:15:13 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:13 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:14 functional-753958 kubelet[9065]: E1202 21:15:14.044739    9065 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:15:14 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:15:14 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:15:14 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 02 21:15:14 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:14 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:14 functional-753958 kubelet[9088]: E1202 21:15:14.852587    9088 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:15:14 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:15:14 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
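The kubelet crash loop above blames the host cgroup mode; one way to confirm which cgroup version the node sees (a sketch, run via minikube ssh against this profile):

	out/minikube-linux-arm64 -p functional-753958 ssh -- stat -fc %T /sys/fs/cgroup/
	# "cgroup2fs" means cgroup v2; "tmpfs" means cgroup v1, matching the validation error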
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958: exit status 2 (379.042928ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753958" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 kubectl -- --context functional-753958 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-753958 kubectl -- --context functional-753958 get pods: exit status 1 (110.421955ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-753958 kubectl -- --context functional-753958 get pods": exit status 1
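To confirm which endpoint the failing context points at, the kubeconfig can be inspected; a minimal sketch:

	kubectl config view --minify --context functional-753958 \
	  -o jsonpath='{.clusters[0].cluster.server}'
	# expected: https://192.168.49.2:8441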
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753958
helpers_test.go:243: (dbg) docker inspect functional-753958:

-- stdout --
	[
	    {
	        "Id": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	        "Created": "2025-12-02T21:00:39.470229988Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301734,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:00:39.535019201Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hostname",
	        "HostsPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hosts",
	        "LogPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a-json.log",
	        "Name": "/functional-753958",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753958:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-753958",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	                "LowerDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753958",
	                "Source": "/var/lib/docker/volumes/functional-753958/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753958",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753958",
	                "name.minikube.sigs.k8s.io": "functional-753958",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44df82336b1507d3d877e818baebb098332071ab7b3e3f7343e15c1fe55b3ab1",
	            "SandboxKey": "/var/run/docker/netns/44df82336b15",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753958": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:7f:7f:d7:c5:84",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e90d0c1216d32743827f22180e4e07c31360f0f3cc3431312aff46869716bb9",
	                    "EndpointID": "5ead8efafa1df1b03c8f1f51c032157081a17706bc48186adc0670bc42c0b521",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753958",
	                        "321ef4a88b51"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
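The port mappings shown in the inspect output can also be read back directly; for example, to find where the apiserver port is published on the host:

	docker port functional-753958 8441/tcp
	# 127.0.0.1:33111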
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958: exit status 2 (304.087166ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-446665 image ls --format yaml --alsologtostderr                                                                                              │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image   │ functional-446665 image ls --format short --alsologtostderr                                                                                             │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image   │ functional-446665 image ls --format json --alsologtostderr                                                                                              │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image   │ functional-446665 image ls --format table --alsologtostderr                                                                                             │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ ssh     │ functional-446665 ssh pgrep buildkitd                                                                                                                   │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │                     │
	│ image   │ functional-446665 image build -t localhost/my-image:functional-446665 testdata/build --alsologtostderr                                                  │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image   │ functional-446665 image ls                                                                                                                              │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ delete  │ -p functional-446665                                                                                                                                    │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ start   │ -p functional-753958 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │                     │
	│ start   │ -p functional-753958 --alsologtostderr -v=8                                                                                                             │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:09 UTC │                     │
	│ cache   │ functional-753958 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ functional-753958 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ functional-753958 cache add registry.k8s.io/pause:latest                                                                                                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ functional-753958 cache add minikube-local-cache-test:functional-753958                                                                                 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ functional-753958 cache delete minikube-local-cache-test:functional-753958                                                                              │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ ssh     │ functional-753958 ssh sudo crictl images                                                                                                                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ ssh     │ functional-753958 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ ssh     │ functional-753958 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │                     │
	│ cache   │ functional-753958 cache reload                                                                                                                          │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ ssh     │ functional-753958 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ kubectl │ functional-753958 kubectl -- --context functional-753958 get pods                                                                                       │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:09:05
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:09:05.869127  307731 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:09:05.869342  307731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:09:05.869372  307731 out.go:374] Setting ErrFile to fd 2...
	I1202 21:09:05.869392  307731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:09:05.870120  307731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:09:05.870642  307731 out.go:368] Setting JSON to false
	I1202 21:09:05.871532  307731 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10284,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 21:09:05.871698  307731 start.go:143] virtualization:  
	I1202 21:09:05.875240  307731 out.go:179] * [functional-753958] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:09:05.878196  307731 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:09:05.878269  307731 notify.go:221] Checking for updates...
	I1202 21:09:05.884072  307731 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:09:05.886942  307731 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:05.889899  307731 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 21:09:05.892813  307731 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:09:05.895771  307731 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:09:05.899217  307731 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:09:05.899365  307731 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:09:05.932799  307731 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:09:05.932919  307731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:09:05.993966  307731 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:09:05.984741651 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:09:05.994072  307731 docker.go:319] overlay module found
	I1202 21:09:05.997248  307731 out.go:179] * Using the docker driver based on existing profile
	I1202 21:09:06.000038  307731 start.go:309] selected driver: docker
	I1202 21:09:06.000060  307731 start.go:927] validating driver "docker" against &{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:09:06.000154  307731 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:09:06.000264  307731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:09:06.066709  307731 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:09:06.057768194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:09:06.067144  307731 cni.go:84] Creating CNI manager for ""
	I1202 21:09:06.067209  307731 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 21:09:06.067263  307731 start.go:353] cluster config:
	{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:09:06.070421  307731 out.go:179] * Starting "functional-753958" primary control-plane node in "functional-753958" cluster
	I1202 21:09:06.073261  307731 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 21:09:06.078117  307731 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:09:06.080953  307731 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 21:09:06.081041  307731 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:09:06.101516  307731 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:09:06.101541  307731 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 21:09:06.138751  307731 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1202 21:09:06.314468  307731 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
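The two 404s above simply mean no preload tarball is published for v1.35.0-beta.0, so minikube falls back to its per-image cache; the first URL can be re-checked by hand:

	curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 | head -n1
	# expect a 404 status line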
	I1202 21:09:06.314628  307731 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/config.json ...
	I1202 21:09:06.314753  307731 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.314852  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 21:09:06.314868  307731 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 127.02µs
	I1202 21:09:06.314884  307731 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 21:09:06.314900  307731 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.314935  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 21:09:06.314945  307731 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 46.735µs
	I1202 21:09:06.314952  307731 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 21:09:06.314968  307731 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315000  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 21:09:06.315009  307731 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 42.764µs
	I1202 21:09:06.315016  307731 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 21:09:06.315030  307731 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315059  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 21:09:06.315069  307731 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 39.875µs
	I1202 21:09:06.315075  307731 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 21:09:06.315089  307731 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315119  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 21:09:06.315127  307731 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 38.629µs
	I1202 21:09:06.315144  307731 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 21:09:06.315143  307731 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:09:06.315177  307731 start.go:360] acquireMachinesLock for functional-753958: {Name:mk3203202a2efc5b27c2a0a16d932dc1b1f07522 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315202  307731 start.go:364] duration metric: took 13.3µs to acquireMachinesLock for "functional-753958"
	I1202 21:09:06.315219  307731 start.go:96] Skipping create...Using existing machine configuration
	I1202 21:09:06.315230  307731 fix.go:54] fixHost starting: 
	I1202 21:09:06.315183  307731 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315307  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 21:09:06.315332  307731 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 153.571µs
	I1202 21:09:06.315357  307731 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 21:09:06.315387  307731 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315443  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 21:09:06.315465  307731 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 80.424µs
	I1202 21:09:06.315488  307731 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:09:06.315527  307731 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315588  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 21:09:06.315619  307731 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 95.488µs
	I1202 21:09:06.315640  307731 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 21:09:06.315489  307731 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 21:09:06.315801  307731 cache.go:87] Successfully saved all images to host disk.
	I1202 21:09:06.333736  307731 fix.go:112] recreateIfNeeded on functional-753958: state=Running err=<nil>
	W1202 21:09:06.333771  307731 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 21:09:06.337175  307731 out.go:252] * Updating the running docker "functional-753958" container ...
	I1202 21:09:06.337206  307731 machine.go:94] provisionDockerMachine start ...
	I1202 21:09:06.337301  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:06.354474  307731 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:06.354810  307731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:09:06.354830  307731 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:09:06.501197  307731 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-753958
	
	I1202 21:09:06.501220  307731 ubuntu.go:182] provisioning hostname "functional-753958"
	I1202 21:09:06.501288  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:06.519375  307731 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:06.519710  307731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:09:06.519727  307731 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-753958 && echo "functional-753958" | sudo tee /etc/hostname
	I1202 21:09:06.687724  307731 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-753958
	
	I1202 21:09:06.687814  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:06.707419  307731 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:06.707758  307731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:09:06.707780  307731 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753958/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:09:06.858340  307731 main.go:143] libmachine: SSH cmd err, output: <nil>: 
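
The heredoc just executed keeps /etc/hosts idempotent: nothing changes when a line already maps the hostname, an existing 127.0.1.1 entry is rewritten in place, and only as a last resort is a new line appended. A minimal standalone sketch of the same pattern (HOST stands in for the profile name):

    # Idempotently map 127.0.1.1 to this machine's hostname,
    # mirroring the provisioning script above.
    HOST=functional-753958
    if ! grep -q "\s${HOST}\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${HOST}/" /etc/hosts   # rewrite in place
      else
        echo "127.0.1.1 ${HOST}" | sudo tee -a /etc/hosts                 # append once
      fi
    fi
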
	I1202 21:09:06.858365  307731 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 21:09:06.858387  307731 ubuntu.go:190] setting up certificates
	I1202 21:09:06.858407  307731 provision.go:84] configureAuth start
	I1202 21:09:06.858472  307731 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
	I1202 21:09:06.877925  307731 provision.go:143] copyHostCerts
	I1202 21:09:06.877980  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 21:09:06.878020  307731 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 21:09:06.878036  307731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 21:09:06.878121  307731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 21:09:06.878219  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 21:09:06.878244  307731 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 21:09:06.878253  307731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 21:09:06.878283  307731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 21:09:06.878341  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 21:09:06.878361  307731 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 21:09:06.878366  307731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 21:09:06.878392  307731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 21:09:06.878454  307731 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.functional-753958 san=[127.0.0.1 192.168.49.2 functional-753958 localhost minikube]
	I1202 21:09:07.212788  307731 provision.go:177] copyRemoteCerts
	I1202 21:09:07.212871  307731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:09:07.212914  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.229990  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.334622  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 21:09:07.334690  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 21:09:07.358156  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 21:09:07.358212  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:09:07.374829  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 21:09:07.374936  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 21:09:07.391856  307731 provision.go:87] duration metric: took 533.420534ms to configureAuth
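
configureAuth reused the host CA material and only re-issued the machine's server pair, requesting the SANs listed above before copying the results to /etc/docker. A hypothetical spot-check, not part of this log, that the generated cert really carries those names:

    # Print the Subject Alternative Names of the freshly generated server cert
    # (path taken from the ServerCertPath in the auth options above).
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
    # Expect something like: DNS:functional-753958, DNS:localhost, DNS:minikube,
    # IP Address:127.0.0.1, IP Address:192.168.49.2
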
	I1202 21:09:07.391883  307731 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:09:07.392075  307731 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:09:07.392088  307731 machine.go:97] duration metric: took 1.054874904s to provisionDockerMachine
	I1202 21:09:07.392096  307731 start.go:293] postStartSetup for "functional-753958" (driver="docker")
	I1202 21:09:07.392108  307731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:09:07.392158  307731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:09:07.392201  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.409892  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.513929  307731 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:09:07.517313  307731 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1202 21:09:07.517377  307731 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1202 21:09:07.517399  307731 command_runner.go:130] > VERSION_ID="12"
	I1202 21:09:07.517411  307731 command_runner.go:130] > VERSION="12 (bookworm)"
	I1202 21:09:07.517423  307731 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1202 21:09:07.517428  307731 command_runner.go:130] > ID=debian
	I1202 21:09:07.517432  307731 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1202 21:09:07.517437  307731 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1202 21:09:07.517460  307731 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1202 21:09:07.517505  307731 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:09:07.517555  307731 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:09:07.517574  307731 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 21:09:07.517638  307731 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 21:09:07.517741  307731 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 21:09:07.517755  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> /etc/ssl/certs/2632412.pem
	I1202 21:09:07.517830  307731 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts -> hosts in /etc/test/nested/copy/263241
	I1202 21:09:07.517839  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts -> /etc/test/nested/copy/263241/hosts
	I1202 21:09:07.517882  307731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/263241
	I1202 21:09:07.525639  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 21:09:07.543648  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts --> /etc/test/nested/copy/263241/hosts (40 bytes)
	I1202 21:09:07.560944  307731 start.go:296] duration metric: took 168.831988ms for postStartSetup
	I1202 21:09:07.561067  307731 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:09:07.561116  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.579622  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.682695  307731 command_runner.go:130] > 12%
	I1202 21:09:07.682778  307731 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:09:07.687210  307731 command_runner.go:130] > 172G
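
The two df probes feed minikube's low-disk warning: awk's NR==2 skips the header row, $5 is the Use% column of df -h and $4 is the Avail column of df -BG, which is how the 12% and 172G above were extracted. The same pair runs on any Linux host:

    df -h /var  | awk 'NR==2{print $5}'   # used percentage of /var (here: 12%)
    df -BG /var | awk 'NR==2{print $4}'   # free space in gibibytes (here: 172G)
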
	I1202 21:09:07.687707  307731 fix.go:56] duration metric: took 1.372471826s for fixHost
	I1202 21:09:07.687729  307731 start.go:83] releasing machines lock for "functional-753958", held for 1.372515567s
	I1202 21:09:07.687799  307731 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
	I1202 21:09:07.704780  307731 ssh_runner.go:195] Run: cat /version.json
	I1202 21:09:07.704833  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.704860  307731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:09:07.704931  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.726613  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.737148  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.829144  307731 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1202 21:09:07.829307  307731 ssh_runner.go:195] Run: systemctl --version
	I1202 21:09:07.919742  307731 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1202 21:09:07.919788  307731 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1202 21:09:07.919811  307731 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1202 21:09:07.919883  307731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 21:09:07.924332  307731 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1202 21:09:07.924495  307731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:09:07.924590  307731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:09:07.932451  307731 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
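
Since kindnet will supply the pod network, any pre-existing bridge or podman CNI definitions are renamed with a .mk_disabled suffix so containerd will not load them; on this machine the find matched nothing. A commented equivalent of that one-liner, with quoting tightened for standalone use:

    # Sideline bridge/podman CNI configs (printing each match) unless they
    # were already disabled on a previous run. mv inherits root from find.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
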
	I1202 21:09:07.932475  307731 start.go:496] detecting cgroup driver to use...
	I1202 21:09:07.932505  307731 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 21:09:07.932553  307731 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 21:09:07.947902  307731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 21:09:07.964330  307731 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:09:07.964400  307731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:09:07.980760  307731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:09:07.995134  307731 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:09:08.122567  307731 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:09:08.232585  307731 docker.go:234] disabling docker service ...
	I1202 21:09:08.232660  307731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:09:08.247806  307731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:09:08.260075  307731 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:09:08.380227  307731 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:09:08.498586  307731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
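
Handing the CRI socket to containerd means taking Docker fully out of the picture: both cri-docker and docker are stopped, their socket units disabled, and the services masked so socket activation cannot resurrect them; the trailing is-active probe verifies docker stayed down. Condensed into a sketch:

    # Stop, disable and mask the Docker-side runtimes so containerd owns the CRI.
    sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    systemctl is-active --quiet docker || echo "docker is down"
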
	I1202 21:09:08.511975  307731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:09:08.525630  307731 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1202 21:09:08.525792  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 21:09:08.534331  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 21:09:08.543412  307731 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 21:09:08.543534  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 21:09:08.552561  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 21:09:08.561268  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 21:09:08.570127  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 21:09:08.578716  307731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:09:08.586804  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 21:09:08.595543  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 21:09:08.604412  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1202 21:09:08.613462  307731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:09:08.620008  307731 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1202 21:09:08.621008  307731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 21:09:08.628262  307731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:09:08.744391  307731 ssh_runner.go:195] Run: sudo systemctl restart containerd
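
The sed batch above edits /etc/containerd/config.toml in place: pinning the sandbox image the kubelet expects, forcing SystemdCgroup = false to match the cgroupfs driver detected on the host, normalizing the runtime type to io.containerd.runc.v2, pointing conf_dir at /etc/cni/net.d, and re-enabling unprivileged ports; the daemon-reload and restart then apply the file. The two edits that matter most, as a standalone sketch:

    # Align containerd's cgroup driver with the detected host driver (cgroupfs)
    # and pin the pause image so kubelet and containerd agree on the sandbox.
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd
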
	I1202 21:09:08.864675  307731 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 21:09:08.864794  307731 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 21:09:08.868351  307731 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1202 21:09:08.868411  307731 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1202 21:09:08.868454  307731 command_runner.go:130] > Device: 0,72	Inode: 1612        Links: 1
	I1202 21:09:08.868480  307731 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 21:09:08.868521  307731 command_runner.go:130] > Access: 2025-12-02 21:09:08.840863455 +0000
	I1202 21:09:08.868544  307731 command_runner.go:130] > Modify: 2025-12-02 21:09:08.840863455 +0000
	I1202 21:09:08.868569  307731 command_runner.go:130] > Change: 2025-12-02 21:09:08.840863455 +0000
	I1202 21:09:08.868599  307731 command_runner.go:130] >  Birth: -
	I1202 21:09:08.868892  307731 start.go:564] Will wait 60s for crictl version
	I1202 21:09:08.868989  307731 ssh_runner.go:195] Run: which crictl
	I1202 21:09:08.872054  307731 command_runner.go:130] > /usr/local/bin/crictl
	I1202 21:09:08.872553  307731 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:09:08.897996  307731 command_runner.go:130] > Version:  0.1.0
	I1202 21:09:08.898089  307731 command_runner.go:130] > RuntimeName:  containerd
	I1202 21:09:08.898120  307731 command_runner.go:130] > RuntimeVersion:  v2.1.5
	I1202 21:09:08.898152  307731 command_runner.go:130] > RuntimeApiVersion:  v1
	I1202 21:09:08.900685  307731 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 21:09:08.900802  307731 ssh_runner.go:195] Run: containerd --version
	I1202 21:09:08.918917  307731 command_runner.go:130] > containerd containerd.io v2.1.5 fcd43222d6b07379a4be9786bda52438f0dd16a1
	I1202 21:09:08.920319  307731 ssh_runner.go:195] Run: containerd --version
	I1202 21:09:08.938561  307731 command_runner.go:130] > containerd containerd.io v2.1.5 fcd43222d6b07379a4be9786bda52438f0dd16a1
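
After the restart, minikube polls up to 60s for the containerd socket and then asks crictl for the runtime version to prove the CRI endpoint answers; both checks succeeded within a second here. A hypothetical wait loop with the same bound:

    # Wait up to 60s for the CRI socket, then confirm containerd responds.
    for _ in $(seq 1 60); do
      [ -S /run/containerd/containerd.sock ] && break
      sleep 1
    done
    sudo crictl version   # should report RuntimeName: containerd
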
	I1202 21:09:08.945896  307731 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 21:09:08.948895  307731 cli_runner.go:164] Run: docker network inspect functional-753958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:09:08.964797  307731 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:09:08.968415  307731 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1202 21:09:08.968697  307731 kubeadm.go:884] updating cluster {Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 21:09:08.968812  307731 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 21:09:08.968871  307731 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:09:08.989960  307731 command_runner.go:130] > {
	I1202 21:09:08.989978  307731 command_runner.go:130] >   "images":  [
	I1202 21:09:08.989982  307731 command_runner.go:130] >     {
	I1202 21:09:08.989991  307731 command_runner.go:130] >       "id":  "sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1202 21:09:08.989996  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990002  307731 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1202 21:09:08.990005  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990009  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990013  307731 command_runner.go:130] >       "size":  "8032639",
	I1202 21:09:08.990018  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990022  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990025  307731 command_runner.go:130] >     },
	I1202 21:09:08.990027  307731 command_runner.go:130] >     {
	I1202 21:09:08.990039  307731 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1202 21:09:08.990044  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990049  307731 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1202 21:09:08.990052  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990057  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990066  307731 command_runner.go:130] >       "size":  "21166088",
	I1202 21:09:08.990071  307731 command_runner.go:130] >       "username":  "nonroot",
	I1202 21:09:08.990075  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990078  307731 command_runner.go:130] >     },
	I1202 21:09:08.990085  307731 command_runner.go:130] >     {
	I1202 21:09:08.990092  307731 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1202 21:09:08.990096  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990101  307731 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1202 21:09:08.990104  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990108  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990112  307731 command_runner.go:130] >       "size":  "21134420",
	I1202 21:09:08.990116  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990120  307731 command_runner.go:130] >         "value":  "0"
	I1202 21:09:08.990123  307731 command_runner.go:130] >       },
	I1202 21:09:08.990126  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990130  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990133  307731 command_runner.go:130] >     },
	I1202 21:09:08.990136  307731 command_runner.go:130] >     {
	I1202 21:09:08.990143  307731 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1202 21:09:08.990147  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990156  307731 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1202 21:09:08.990159  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990163  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990167  307731 command_runner.go:130] >       "size":  "24676285",
	I1202 21:09:08.990170  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990175  307731 command_runner.go:130] >         "value":  "0"
	I1202 21:09:08.990178  307731 command_runner.go:130] >       },
	I1202 21:09:08.990182  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990189  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990192  307731 command_runner.go:130] >     },
	I1202 21:09:08.990195  307731 command_runner.go:130] >     {
	I1202 21:09:08.990202  307731 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1202 21:09:08.990206  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990213  307731 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1202 21:09:08.990216  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990220  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990224  307731 command_runner.go:130] >       "size":  "20658969",
	I1202 21:09:08.990227  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990231  307731 command_runner.go:130] >         "value":  "0"
	I1202 21:09:08.990233  307731 command_runner.go:130] >       },
	I1202 21:09:08.990237  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990241  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990244  307731 command_runner.go:130] >     },
	I1202 21:09:08.990246  307731 command_runner.go:130] >     {
	I1202 21:09:08.990253  307731 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1202 21:09:08.990257  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990262  307731 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1202 21:09:08.990265  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990269  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990273  307731 command_runner.go:130] >       "size":  "22428165",
	I1202 21:09:08.990277  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990280  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990283  307731 command_runner.go:130] >     },
	I1202 21:09:08.990287  307731 command_runner.go:130] >     {
	I1202 21:09:08.990293  307731 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1202 21:09:08.990297  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990302  307731 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1202 21:09:08.990305  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990314  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990318  307731 command_runner.go:130] >       "size":  "15389290",
	I1202 21:09:08.990322  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990329  307731 command_runner.go:130] >         "value":  "0"
	I1202 21:09:08.990332  307731 command_runner.go:130] >       },
	I1202 21:09:08.990336  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990339  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990342  307731 command_runner.go:130] >     },
	I1202 21:09:08.990345  307731 command_runner.go:130] >     {
	I1202 21:09:08.990352  307731 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1202 21:09:08.990356  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990361  307731 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1202 21:09:08.990364  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990371  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990375  307731 command_runner.go:130] >       "size":  "265458",
	I1202 21:09:08.990379  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990383  307731 command_runner.go:130] >         "value":  "65535"
	I1202 21:09:08.990386  307731 command_runner.go:130] >       },
	I1202 21:09:08.990389  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990393  307731 command_runner.go:130] >       "pinned":  true
	I1202 21:09:08.990396  307731 command_runner.go:130] >     }
	I1202 21:09:08.990402  307731 command_runner.go:130] >   ]
	I1202 21:09:08.990404  307731 command_runner.go:130] > }
	I1202 21:09:08.992021  307731 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 21:09:08.992044  307731 cache_images.go:86] Images are preloaded, skipping loading
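
That image dump is how the preload check works: every repoTag reported by the CRI is compared against the image list required for v1.35.0-beta.0, and since all eight were present, loading is skipped. To flatten the same JSON for inspection (jq is an assumption here, not part of the minikube toolchain):

    # One tagged image per line, straight from the CRI.
    sudo crictl images --output json | jq -r '.images[].repoTags[]'
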
	I1202 21:09:08.992052  307731 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1202 21:09:08.992155  307731 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-753958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
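
The empty ExecStart= in the unit above is deliberate systemd idiom: in a drop-in, a bare assignment clears the ExecStart inherited from the base kubelet unit, so exactly one invocation, with minikube's flags, survives. Writing such an override by hand would look like this (flags abbreviated; the real file is the 328-byte 10-kubeadm.conf scp'd below):

    # Replace, rather than append to, the kubelet ExecStart via a drop-in.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
    EOF
    sudo systemctl daemon-reload
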
	I1202 21:09:08.992222  307731 ssh_runner.go:195] Run: sudo crictl info
	I1202 21:09:09.027109  307731 command_runner.go:130] > {
	I1202 21:09:09.027127  307731 command_runner.go:130] >   "cniconfig": {
	I1202 21:09:09.027132  307731 command_runner.go:130] >     "Networks": [
	I1202 21:09:09.027136  307731 command_runner.go:130] >       {
	I1202 21:09:09.027142  307731 command_runner.go:130] >         "Config": {
	I1202 21:09:09.027146  307731 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1202 21:09:09.027151  307731 command_runner.go:130] >           "Name": "cni-loopback",
	I1202 21:09:09.027155  307731 command_runner.go:130] >           "Plugins": [
	I1202 21:09:09.027164  307731 command_runner.go:130] >             {
	I1202 21:09:09.027168  307731 command_runner.go:130] >               "Network": {
	I1202 21:09:09.027172  307731 command_runner.go:130] >                 "ipam": {},
	I1202 21:09:09.027178  307731 command_runner.go:130] >                 "type": "loopback"
	I1202 21:09:09.027181  307731 command_runner.go:130] >               },
	I1202 21:09:09.027186  307731 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1202 21:09:09.027189  307731 command_runner.go:130] >             }
	I1202 21:09:09.027193  307731 command_runner.go:130] >           ],
	I1202 21:09:09.027203  307731 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1202 21:09:09.027207  307731 command_runner.go:130] >         },
	I1202 21:09:09.027212  307731 command_runner.go:130] >         "IFName": "lo"
	I1202 21:09:09.027215  307731 command_runner.go:130] >       }
	I1202 21:09:09.027218  307731 command_runner.go:130] >     ],
	I1202 21:09:09.027223  307731 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1202 21:09:09.027227  307731 command_runner.go:130] >     "PluginDirs": [
	I1202 21:09:09.027230  307731 command_runner.go:130] >       "/opt/cni/bin"
	I1202 21:09:09.027234  307731 command_runner.go:130] >     ],
	I1202 21:09:09.027238  307731 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1202 21:09:09.027242  307731 command_runner.go:130] >     "Prefix": "eth"
	I1202 21:09:09.027245  307731 command_runner.go:130] >   },
	I1202 21:09:09.027248  307731 command_runner.go:130] >   "config": {
	I1202 21:09:09.027252  307731 command_runner.go:130] >     "cdiSpecDirs": [
	I1202 21:09:09.027256  307731 command_runner.go:130] >       "/etc/cdi",
	I1202 21:09:09.027259  307731 command_runner.go:130] >       "/var/run/cdi"
	I1202 21:09:09.027263  307731 command_runner.go:130] >     ],
	I1202 21:09:09.027266  307731 command_runner.go:130] >     "cni": {
	I1202 21:09:09.027269  307731 command_runner.go:130] >       "binDir": "",
	I1202 21:09:09.027273  307731 command_runner.go:130] >       "binDirs": [
	I1202 21:09:09.027277  307731 command_runner.go:130] >         "/opt/cni/bin"
	I1202 21:09:09.027280  307731 command_runner.go:130] >       ],
	I1202 21:09:09.027285  307731 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1202 21:09:09.027289  307731 command_runner.go:130] >       "confTemplate": "",
	I1202 21:09:09.027292  307731 command_runner.go:130] >       "ipPref": "",
	I1202 21:09:09.027300  307731 command_runner.go:130] >       "maxConfNum": 1,
	I1202 21:09:09.027304  307731 command_runner.go:130] >       "setupSerially": false,
	I1202 21:09:09.027309  307731 command_runner.go:130] >       "useInternalLoopback": false
	I1202 21:09:09.027312  307731 command_runner.go:130] >     },
	I1202 21:09:09.027321  307731 command_runner.go:130] >     "containerd": {
	I1202 21:09:09.027325  307731 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1202 21:09:09.027330  307731 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1202 21:09:09.027335  307731 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1202 21:09:09.027339  307731 command_runner.go:130] >       "runtimes": {
	I1202 21:09:09.027342  307731 command_runner.go:130] >         "runc": {
	I1202 21:09:09.027347  307731 command_runner.go:130] >           "ContainerAnnotations": null,
	I1202 21:09:09.027351  307731 command_runner.go:130] >           "PodAnnotations": null,
	I1202 21:09:09.027357  307731 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1202 21:09:09.027361  307731 command_runner.go:130] >           "cgroupWritable": false,
	I1202 21:09:09.027365  307731 command_runner.go:130] >           "cniConfDir": "",
	I1202 21:09:09.027370  307731 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1202 21:09:09.027374  307731 command_runner.go:130] >           "io_type": "",
	I1202 21:09:09.027378  307731 command_runner.go:130] >           "options": {
	I1202 21:09:09.027382  307731 command_runner.go:130] >             "BinaryName": "",
	I1202 21:09:09.027386  307731 command_runner.go:130] >             "CriuImagePath": "",
	I1202 21:09:09.027390  307731 command_runner.go:130] >             "CriuWorkPath": "",
	I1202 21:09:09.027394  307731 command_runner.go:130] >             "IoGid": 0,
	I1202 21:09:09.027398  307731 command_runner.go:130] >             "IoUid": 0,
	I1202 21:09:09.027402  307731 command_runner.go:130] >             "NoNewKeyring": false,
	I1202 21:09:09.027407  307731 command_runner.go:130] >             "Root": "",
	I1202 21:09:09.027411  307731 command_runner.go:130] >             "ShimCgroup": "",
	I1202 21:09:09.027415  307731 command_runner.go:130] >             "SystemdCgroup": false
	I1202 21:09:09.027418  307731 command_runner.go:130] >           },
	I1202 21:09:09.027424  307731 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1202 21:09:09.027430  307731 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1202 21:09:09.027434  307731 command_runner.go:130] >           "runtimePath": "",
	I1202 21:09:09.027440  307731 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1202 21:09:09.027444  307731 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1202 21:09:09.027451  307731 command_runner.go:130] >           "snapshotter": ""
	I1202 21:09:09.027455  307731 command_runner.go:130] >         }
	I1202 21:09:09.027458  307731 command_runner.go:130] >       }
	I1202 21:09:09.027461  307731 command_runner.go:130] >     },
	I1202 21:09:09.027470  307731 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1202 21:09:09.027476  307731 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1202 21:09:09.027481  307731 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1202 21:09:09.027485  307731 command_runner.go:130] >     "disableApparmor": false,
	I1202 21:09:09.027490  307731 command_runner.go:130] >     "disableHugetlbController": true,
	I1202 21:09:09.027494  307731 command_runner.go:130] >     "disableProcMount": false,
	I1202 21:09:09.027499  307731 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1202 21:09:09.027503  307731 command_runner.go:130] >     "enableCDI": true,
	I1202 21:09:09.027507  307731 command_runner.go:130] >     "enableSelinux": false,
	I1202 21:09:09.027511  307731 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1202 21:09:09.027515  307731 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1202 21:09:09.027520  307731 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1202 21:09:09.027525  307731 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1202 21:09:09.027529  307731 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1202 21:09:09.027534  307731 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1202 21:09:09.027538  307731 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1202 21:09:09.027544  307731 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1202 21:09:09.027548  307731 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1202 21:09:09.027554  307731 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1202 21:09:09.027558  307731 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1202 21:09:09.027563  307731 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1202 21:09:09.027566  307731 command_runner.go:130] >   },
	I1202 21:09:09.027569  307731 command_runner.go:130] >   "features": {
	I1202 21:09:09.027574  307731 command_runner.go:130] >     "supplemental_groups_policy": true
	I1202 21:09:09.027577  307731 command_runner.go:130] >   },
	I1202 21:09:09.027581  307731 command_runner.go:130] >   "golang": "go1.24.9",
	I1202 21:09:09.027591  307731 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1202 21:09:09.027600  307731 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1202 21:09:09.027604  307731 command_runner.go:130] >   "runtimeHandlers": [
	I1202 21:09:09.027610  307731 command_runner.go:130] >     {
	I1202 21:09:09.027614  307731 command_runner.go:130] >       "features": {
	I1202 21:09:09.027619  307731 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1202 21:09:09.027623  307731 command_runner.go:130] >         "user_namespaces": true
	I1202 21:09:09.027626  307731 command_runner.go:130] >       }
	I1202 21:09:09.027629  307731 command_runner.go:130] >     },
	I1202 21:09:09.027631  307731 command_runner.go:130] >     {
	I1202 21:09:09.027635  307731 command_runner.go:130] >       "features": {
	I1202 21:09:09.027639  307731 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1202 21:09:09.027644  307731 command_runner.go:130] >         "user_namespaces": true
	I1202 21:09:09.027646  307731 command_runner.go:130] >       },
	I1202 21:09:09.027650  307731 command_runner.go:130] >       "name": "runc"
	I1202 21:09:09.027653  307731 command_runner.go:130] >     }
	I1202 21:09:09.027656  307731 command_runner.go:130] >   ],
	I1202 21:09:09.027659  307731 command_runner.go:130] >   "status": {
	I1202 21:09:09.027663  307731 command_runner.go:130] >     "conditions": [
	I1202 21:09:09.027666  307731 command_runner.go:130] >       {
	I1202 21:09:09.027670  307731 command_runner.go:130] >         "message": "",
	I1202 21:09:09.027673  307731 command_runner.go:130] >         "reason": "",
	I1202 21:09:09.027677  307731 command_runner.go:130] >         "status": true,
	I1202 21:09:09.027681  307731 command_runner.go:130] >         "type": "RuntimeReady"
	I1202 21:09:09.027685  307731 command_runner.go:130] >       },
	I1202 21:09:09.027688  307731 command_runner.go:130] >       {
	I1202 21:09:09.027694  307731 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1202 21:09:09.027699  307731 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1202 21:09:09.027703  307731 command_runner.go:130] >         "status": false,
	I1202 21:09:09.027707  307731 command_runner.go:130] >         "type": "NetworkReady"
	I1202 21:09:09.027710  307731 command_runner.go:130] >       },
	I1202 21:09:09.027713  307731 command_runner.go:130] >       {
	I1202 21:09:09.027718  307731 command_runner.go:130] >         "message": "",
	I1202 21:09:09.027722  307731 command_runner.go:130] >         "reason": "",
	I1202 21:09:09.027726  307731 command_runner.go:130] >         "status": true,
	I1202 21:09:09.027731  307731 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1202 21:09:09.027737  307731 command_runner.go:130] >       }
	I1202 21:09:09.027740  307731 command_runner.go:130] >     ]
	I1202 21:09:09.027743  307731 command_runner.go:130] >   }
	I1202 21:09:09.027746  307731 command_runner.go:130] > }
	I1202 21:09:09.029686  307731 cni.go:84] Creating CNI manager for ""
	I1202 21:09:09.029710  307731 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 21:09:09.029745  307731 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:09:09.029776  307731 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753958 NodeName:functional-753958 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:09:09.029910  307731 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-753958"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
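
This single manifest stacks InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, and it is what lands on the node as /var/tmp/minikube/kubeadm.yaml.new below. Recent kubeadm releases can sanity-check such a file without touching the cluster; a hypothetical check, assuming the file is already on the node:

    # Validate the multi-document kubeadm config offline.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
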
	
	I1202 21:09:09.029985  307731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 21:09:09.036886  307731 command_runner.go:130] > kubeadm
	I1202 21:09:09.036909  307731 command_runner.go:130] > kubectl
	I1202 21:09:09.036915  307731 command_runner.go:130] > kubelet
	I1202 21:09:09.037789  307731 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 21:09:09.037851  307731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:09:09.045467  307731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 21:09:09.058043  307731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 21:09:09.070239  307731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1202 21:09:09.082241  307731 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:09:09.085795  307731 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1202 21:09:09.086355  307731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:09:09.208713  307731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:09:09.542492  307731 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958 for IP: 192.168.49.2
	I1202 21:09:09.542524  307731 certs.go:195] generating shared ca certs ...
	I1202 21:09:09.542541  307731 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:09.542698  307731 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 21:09:09.542757  307731 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 21:09:09.542770  307731 certs.go:257] generating profile certs ...
	I1202 21:09:09.542908  307731 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key
	I1202 21:09:09.542989  307731 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key.c4f6fd35
	I1202 21:09:09.543042  307731 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key
	I1202 21:09:09.543063  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 21:09:09.543077  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 21:09:09.543095  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 21:09:09.543113  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 21:09:09.543136  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 21:09:09.543152  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 21:09:09.543163  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 21:09:09.543181  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 21:09:09.543248  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 21:09:09.543300  307731 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 21:09:09.543314  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:09:09.543356  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 21:09:09.543389  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:09:09.543418  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 21:09:09.543492  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 21:09:09.543552  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.543576  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.543600  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem -> /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.544214  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:09:09.562449  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:09:09.579657  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:09:09.597016  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 21:09:09.615077  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 21:09:09.633715  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 21:09:09.651379  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:09:09.669401  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 21:09:09.688777  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 21:09:09.706718  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:09:09.724108  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 21:09:09.741960  307731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 21:09:09.754915  307731 ssh_runner.go:195] Run: openssl version
	I1202 21:09:09.760531  307731 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1202 21:09:09.760935  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 21:09:09.769169  307731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.772688  307731 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.772981  307731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.773081  307731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.818276  307731 command_runner.go:130] > 3ec20f2e
	I1202 21:09:09.818787  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 21:09:09.826520  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:09:09.834827  307731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.838656  307731 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.838686  307731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.838739  307731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.879212  307731 command_runner.go:130] > b5213941
	I1202 21:09:09.879657  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 21:09:09.887484  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 21:09:09.895881  307731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.899623  307731 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.899669  307731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.899717  307731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.940074  307731 command_runner.go:130] > 51391683
	I1202 21:09:09.940525  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
	I1202 21:09:09.948324  307731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:09:09.951828  307731 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:09:09.951867  307731 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1202 21:09:09.951875  307731 command_runner.go:130] > Device: 259,1	Inode: 1305405     Links: 1
	I1202 21:09:09.951881  307731 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 21:09:09.951888  307731 command_runner.go:130] > Access: 2025-12-02 21:05:02.335914079 +0000
	I1202 21:09:09.951894  307731 command_runner.go:130] > Modify: 2025-12-02 21:00:57.486756379 +0000
	I1202 21:09:09.951898  307731 command_runner.go:130] > Change: 2025-12-02 21:00:57.486756379 +0000
	I1202 21:09:09.951903  307731 command_runner.go:130] >  Birth: 2025-12-02 21:00:57.486756379 +0000
	I1202 21:09:09.951997  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 21:09:09.992474  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:09.992586  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 21:09:10.044870  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:10.045432  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 21:09:10.090412  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:10.091042  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 21:09:10.132690  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:10.133145  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 21:09:10.173976  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:10.174453  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 21:09:10.215639  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:10.216098  307731 kubeadm.go:401] StartCluster: {Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:09:10.216220  307731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 21:09:10.216321  307731 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:09:10.242158  307731 cri.go:89] found id: ""
	I1202 21:09:10.242234  307731 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:09:10.249118  307731 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1202 21:09:10.249140  307731 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1202 21:09:10.249151  307731 command_runner.go:130] > /var/lib/minikube/etcd:
	I1202 21:09:10.250041  307731 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 21:09:10.250060  307731 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 21:09:10.250140  307731 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 21:09:10.257350  307731 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:09:10.257790  307731 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-753958" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:10.257903  307731 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-261381/kubeconfig needs updating (will repair): [kubeconfig missing "functional-753958" cluster setting kubeconfig missing "functional-753958" context setting]
	I1202 21:09:10.258244  307731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:10.258662  307731 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:10.258838  307731 kapi.go:59] client config for functional-753958: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key", CAFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 21:09:10.259364  307731 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 21:09:10.259381  307731 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 21:09:10.259386  307731 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 21:09:10.259392  307731 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 21:09:10.259397  307731 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 21:09:10.259441  307731 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1202 21:09:10.259684  307731 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 21:09:10.267575  307731 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 21:09:10.267606  307731 kubeadm.go:602] duration metric: took 17.540251ms to restartPrimaryControlPlane
	I1202 21:09:10.267616  307731 kubeadm.go:403] duration metric: took 51.535685ms to StartCluster
	I1202 21:09:10.267631  307731 settings.go:142] acquiring lock: {Name:mk484fa83ac7553aeb154b510943680cadb4046e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:10.267694  307731 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:10.268283  307731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:10.268485  307731 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 21:09:10.268816  307731 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:09:10.268866  307731 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 21:09:10.268984  307731 addons.go:70] Setting storage-provisioner=true in profile "functional-753958"
	I1202 21:09:10.269003  307731 addons.go:239] Setting addon storage-provisioner=true in "functional-753958"
	I1202 21:09:10.269024  307731 host.go:66] Checking if "functional-753958" exists ...
	I1202 21:09:10.269023  307731 addons.go:70] Setting default-storageclass=true in profile "functional-753958"
	I1202 21:09:10.269176  307731 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-753958"
	I1202 21:09:10.269690  307731 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:09:10.269905  307731 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:09:10.274878  307731 out.go:179] * Verifying Kubernetes components...
	I1202 21:09:10.279673  307731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:09:10.309974  307731 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:10.310183  307731 kapi.go:59] client config for functional-753958: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key", CAFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 21:09:10.310507  307731 addons.go:239] Setting addon default-storageclass=true in "functional-753958"
	I1202 21:09:10.310544  307731 host.go:66] Checking if "functional-753958" exists ...
	I1202 21:09:10.311034  307731 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:09:10.322713  307731 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:09:10.325707  307731 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:10.325729  307731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 21:09:10.325795  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:10.357829  307731 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:10.357850  307731 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 21:09:10.357914  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:10.371695  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:10.400329  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:10.499296  307731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:09:10.516631  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:10.547824  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:11.294654  307731 node_ready.go:35] waiting up to 6m0s for node "functional-753958" to be "Ready" ...
	I1202 21:09:11.294774  307731 type.go:168] "Request Body" body=""
	I1202 21:09:11.294779  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.294839  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:11.295227  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:11.295315  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.295463  307731 retry.go:31] will retry after 210.924688ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.295467  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:11.295364  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.295550  307731 retry.go:31] will retry after 203.437895ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.500110  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:11.506791  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:11.578640  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:11.581915  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.581967  307731 retry.go:31] will retry after 400.592485ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.595609  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:11.595676  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.595708  307731 retry.go:31] will retry after 422.737023ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.794907  307731 type.go:168] "Request Body" body=""
	I1202 21:09:11.795054  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:11.795388  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:11.982828  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:12.018958  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:12.086246  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:12.086287  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.086307  307731 retry.go:31] will retry after 564.880189ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.117100  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:12.117143  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.117191  307731 retry.go:31] will retry after 637.534191ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.295409  307731 type.go:168] "Request Body" body=""
	I1202 21:09:12.295483  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:12.295805  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:12.652365  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:12.710471  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:12.710580  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.710622  307731 retry.go:31] will retry after 876.325619ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.755731  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:12.795162  307731 type.go:168] "Request Body" body=""
	I1202 21:09:12.795277  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:12.795599  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:12.835060  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:12.835099  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.835118  307731 retry.go:31] will retry after 1.227832404s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:13.295855  307731 type.go:168] "Request Body" body=""
	I1202 21:09:13.295948  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:13.296269  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:13.296338  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:13.587806  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:13.646676  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:13.646721  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:13.646742  307731 retry.go:31] will retry after 1.443838067s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:13.795158  307731 type.go:168] "Request Body" body=""
	I1202 21:09:13.795236  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:13.795586  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:14.064081  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:14.123819  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:14.127173  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:14.127215  307731 retry.go:31] will retry after 1.221247817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:14.295601  307731 type.go:168] "Request Body" body=""
	I1202 21:09:14.295675  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:14.295968  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:14.795792  307731 type.go:168] "Request Body" body=""
	I1202 21:09:14.795874  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:14.796179  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:15.091734  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:15.151479  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:15.151525  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:15.151546  307731 retry.go:31] will retry after 1.850953854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:15.294847  307731 type.go:168] "Request Body" body=""
	I1202 21:09:15.294941  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:15.295253  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:15.349587  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:15.413525  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:15.416721  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:15.416752  307731 retry.go:31] will retry after 1.691274377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:15.795194  307731 type.go:168] "Request Body" body=""
	I1202 21:09:15.795307  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:15.795621  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:15.795696  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:16.295456  307731 type.go:168] "Request Body" body=""
	I1202 21:09:16.295552  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:16.295874  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:16.795680  307731 type.go:168] "Request Body" body=""
	I1202 21:09:16.795755  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:16.796091  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:17.003193  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:17.061077  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:17.064289  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:17.064321  307731 retry.go:31] will retry after 2.076549374s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:17.108496  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:17.168660  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:17.168709  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:17.168731  307731 retry.go:31] will retry after 3.158627903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:17.295738  307731 type.go:168] "Request Body" body=""
	I1202 21:09:17.295812  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:17.296081  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:17.794893  307731 type.go:168] "Request Body" body=""
	I1202 21:09:17.794974  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:17.795334  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:18.294955  307731 type.go:168] "Request Body" body=""
	I1202 21:09:18.295057  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:18.295390  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:18.295447  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:18.795090  307731 type.go:168] "Request Body" body=""
	I1202 21:09:18.795156  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:18.795510  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:19.141123  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:19.199068  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:19.202437  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:19.202469  307731 retry.go:31] will retry after 2.729492901s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:19.295833  307731 type.go:168] "Request Body" body=""
	I1202 21:09:19.295905  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:19.296241  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:19.794962  307731 type.go:168] "Request Body" body=""
	I1202 21:09:19.795035  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:19.795344  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:20.295255  307731 type.go:168] "Request Body" body=""
	I1202 21:09:20.295325  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:20.295687  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:20.295737  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:20.327882  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:20.391902  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:20.391939  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:20.391960  307731 retry.go:31] will retry after 4.367650264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:20.795532  307731 type.go:168] "Request Body" body=""
	I1202 21:09:20.795609  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:20.795920  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:21.295837  307731 type.go:168] "Request Body" body=""
	I1202 21:09:21.295923  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:21.296260  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:21.794943  307731 type.go:168] "Request Body" body=""
	I1202 21:09:21.795018  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:21.795282  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:21.932718  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:21.990698  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:21.990736  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:21.990761  307731 retry.go:31] will retry after 5.196584204s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
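
Editor's note: the apply fails before anything reaches the cluster, because kubectl's client-side validation first downloads the OpenAPI document from the apiserver; with the apiserver down, the command exits 1 at the validation step, hence the --validate=false hint in the error text. Below is a sketch of invoking kubectl the way these ssh_runner lines do (minus the sudo/ssh transport) and classifying the failure as retryable. The paths are taken from the log; the stderr substring check is an illustrative heuristic, not minikube's actual classification logic.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// applyManifest runs `kubectl apply --force -f <manifest>` and reports
// whether the failure looks transient.
func applyManifest(kubectl, kubeconfig, manifest string) (retryable bool, err error) {
	cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	if runErr := cmd.Run(); runErr != nil {
		// Validation fetches the OpenAPI schema from the apiserver before
		// anything is sent, so a down apiserver fails the apply immediately.
		if strings.Contains(stderr.String(), "connection refused") {
			return true, fmt.Errorf("apiserver unreachable: %w", runErr)
		}
		return false, runErr
	}
	return false, nil
}

func main() {
	retry, err := applyManifest(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml",
	)
	fmt.Println("retryable:", retry, "err:", err)
}

Note that minikube leaves validation on and simply retries: the error clears on its own as soon as the apiserver is reachable again.
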
	I1202 21:09:22.295359  307731 type.go:168] "Request Body" body=""
	I1202 21:09:22.295443  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:22.295788  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:22.295845  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:22.795464  307731 type.go:168] "Request Body" body=""
	I1202 21:09:22.795562  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:22.795917  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:23.295669  307731 type.go:168] "Request Body" body=""
	I1202 21:09:23.295739  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:23.296001  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:23.795753  307731 type.go:168] "Request Body" body=""
	I1202 21:09:23.795825  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:23.796151  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:24.295815  307731 type.go:168] "Request Body" body=""
	I1202 21:09:24.295890  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:24.296207  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:24.296265  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:24.759924  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:24.795570  307731 type.go:168] "Request Body" body=""
	I1202 21:09:24.795642  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:24.795905  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:24.817214  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:24.821374  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:24.821411  307731 retry.go:31] will retry after 3.851570628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:25.294967  307731 type.go:168] "Request Body" body=""
	I1202 21:09:25.295041  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:25.295322  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:25.794947  307731 type.go:168] "Request Body" body=""
	I1202 21:09:25.795017  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:25.795343  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:26.295350  307731 type.go:168] "Request Body" body=""
	I1202 21:09:26.295431  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:26.295727  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:26.795297  307731 type.go:168] "Request Body" body=""
	I1202 21:09:26.795366  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:26.795685  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:26.795740  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:27.188447  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:27.254238  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:27.254282  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:27.254304  307731 retry.go:31] will retry after 6.785596085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:27.295437  307731 type.go:168] "Request Body" body=""
	I1202 21:09:27.295523  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:27.295865  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:27.794985  307731 type.go:168] "Request Body" body=""
	I1202 21:09:27.795057  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:27.795311  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:28.294999  307731 type.go:168] "Request Body" body=""
	I1202 21:09:28.295102  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:28.295384  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:28.674112  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:28.734788  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:28.734834  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:28.734853  307731 retry.go:31] will retry after 5.470614597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:28.794971  307731 type.go:168] "Request Body" body=""
	I1202 21:09:28.795042  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:28.795339  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:29.295607  307731 type.go:168] "Request Body" body=""
	I1202 21:09:29.295683  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:29.296024  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:29.296105  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:29.794837  307731 type.go:168] "Request Body" body=""
	I1202 21:09:29.794912  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:29.795239  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:30.295136  307731 type.go:168] "Request Body" body=""
	I1202 21:09:30.295232  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:30.295517  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:30.794890  307731 type.go:168] "Request Body" body=""
	I1202 21:09:30.794959  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:30.795267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:31.294932  307731 type.go:168] "Request Body" body=""
	I1202 21:09:31.295003  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:31.295317  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:31.794931  307731 type.go:168] "Request Body" body=""
	I1202 21:09:31.795007  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:31.795289  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:31.795338  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:32.295580  307731 type.go:168] "Request Body" body=""
	I1202 21:09:32.295653  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:32.295944  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:32.795804  307731 type.go:168] "Request Body" body=""
	I1202 21:09:32.795885  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:32.796241  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:33.294972  307731 type.go:168] "Request Body" body=""
	I1202 21:09:33.295049  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:33.295374  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:33.794828  307731 type.go:168] "Request Body" body=""
	I1202 21:09:33.794899  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:33.795152  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:34.040709  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:34.103827  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:34.103870  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:34.103890  307731 retry.go:31] will retry after 13.233422448s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:34.206146  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:34.265937  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:34.265992  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:34.266011  307731 retry.go:31] will retry after 9.178751123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
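
Editor's note: the interleaved timestamps (readiness polls every ~500ms, with storageclass.yaml and storage-provisioner.yaml applies landing in between on independent backoff schedules) suggest these run as concurrent goroutines rather than one sequential loop. A sketch of that shape, assuming per-manifest workers; retryApply is a hypothetical helper standing in for the apply-with-backoff logic sketched earlier.

package main

import "sync"

// retryApply is a hypothetical stand-in for the apply-with-backoff helper.
func retryApply(manifest string) error { return nil }

// enableAddonsConcurrently applies each addon manifest in its own goroutine,
// which would explain why the two manifests' retries interleave with each
// other and with the readiness polls in the log above.
func enableAddonsConcurrently(manifests []string) {
	var wg sync.WaitGroup
	for _, m := range manifests {
		wg.Add(1)
		go func(manifest string) {
			defer wg.Done()
			_ = retryApply(manifest)
		}(m)
	}
	wg.Wait()
}

func main() {
	enableAddonsConcurrently([]string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	})
}
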
	I1202 21:09:34.295270  307731 type.go:168] "Request Body" body=""
	I1202 21:09:34.295377  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:34.295751  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:34.295808  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:34.795590  307731 type.go:168] "Request Body" body=""
	I1202 21:09:34.795669  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:34.795998  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:35.295384  307731 type.go:168] "Request Body" body=""
	I1202 21:09:35.295449  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:35.295792  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:35.795609  307731 type.go:168] "Request Body" body=""
	I1202 21:09:35.795690  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:35.795985  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:36.294855  307731 type.go:168] "Request Body" body=""
	I1202 21:09:36.294949  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:36.295235  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:36.795205  307731 type.go:168] "Request Body" body=""
	I1202 21:09:36.795285  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:36.795563  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:36.795617  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:37.294937  307731 type.go:168] "Request Body" body=""
	I1202 21:09:37.295019  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:37.295313  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:37.794909  307731 type.go:168] "Request Body" body=""
	I1202 21:09:37.794999  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:37.795276  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:38.294875  307731 type.go:168] "Request Body" body=""
	I1202 21:09:38.294951  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:38.295216  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:38.794960  307731 type.go:168] "Request Body" body=""
	I1202 21:09:38.795035  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:38.795328  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:39.295040  307731 type.go:168] "Request Body" body=""
	I1202 21:09:39.295116  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:39.295474  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:39.295528  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:39.795755  307731 type.go:168] "Request Body" body=""
	I1202 21:09:39.795827  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:39.796097  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:40.295759  307731 type.go:168] "Request Body" body=""
	I1202 21:09:40.295831  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:40.296122  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:40.794848  307731 type.go:168] "Request Body" body=""
	I1202 21:09:40.794921  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:40.795244  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:41.294881  307731 type.go:168] "Request Body" body=""
	I1202 21:09:41.294965  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:41.295255  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:41.794953  307731 type.go:168] "Request Body" body=""
	I1202 21:09:41.795034  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:41.795359  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:41.795415  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:42.295127  307731 type.go:168] "Request Body" body=""
	I1202 21:09:42.295208  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:42.295661  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:42.795016  307731 type.go:168] "Request Body" body=""
	I1202 21:09:42.795105  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:42.795395  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:43.294952  307731 type.go:168] "Request Body" body=""
	I1202 21:09:43.295026  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:43.295345  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:43.445783  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:43.508150  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:43.508187  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:43.508208  307731 retry.go:31] will retry after 18.255533178s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:43.795638  307731 type.go:168] "Request Body" body=""
	I1202 21:09:43.795730  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:43.796071  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:43.796132  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:44.295329  307731 type.go:168] "Request Body" body=""
	I1202 21:09:44.295407  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:44.295673  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:44.795488  307731 type.go:168] "Request Body" body=""
	I1202 21:09:44.795564  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:44.795884  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:45.295740  307731 type.go:168] "Request Body" body=""
	I1202 21:09:45.295822  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:45.296199  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:45.794853  307731 type.go:168] "Request Body" body=""
	I1202 21:09:45.794922  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:45.795177  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:46.295009  307731 type.go:168] "Request Body" body=""
	I1202 21:09:46.295107  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:46.295418  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:46.295474  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:46.795131  307731 type.go:168] "Request Body" body=""
	I1202 21:09:46.795214  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:46.795532  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:47.295250  307731 type.go:168] "Request Body" body=""
	I1202 21:09:47.295339  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:47.295611  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:47.337905  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:47.398412  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:47.398459  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:47.398478  307731 retry.go:31] will retry after 28.802230035s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:47.794958  307731 type.go:168] "Request Body" body=""
	I1202 21:09:47.795033  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:47.795332  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:48.294941  307731 type.go:168] "Request Body" body=""
	I1202 21:09:48.295012  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:48.295290  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:48.794980  307731 type.go:168] "Request Body" body=""
	I1202 21:09:48.795053  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:48.795304  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:48.795347  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:49.294944  307731 type.go:168] "Request Body" body=""
	I1202 21:09:49.295017  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:49.295302  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:49.794949  307731 type.go:168] "Request Body" body=""
	I1202 21:09:49.795021  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:49.795348  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:50.295306  307731 type.go:168] "Request Body" body=""
	I1202 21:09:50.295374  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:50.295672  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:50.795457  307731 type.go:168] "Request Body" body=""
	I1202 21:09:50.795527  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:50.795850  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:50.795908  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:51.295904  307731 type.go:168] "Request Body" body=""
	I1202 21:09:51.295977  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:51.296267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:51.794890  307731 type.go:168] "Request Body" body=""
	I1202 21:09:51.794969  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:51.795305  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:52.294941  307731 type.go:168] "Request Body" body=""
	I1202 21:09:52.295012  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:52.295341  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:52.794926  307731 type.go:168] "Request Body" body=""
	I1202 21:09:52.795024  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:52.795310  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:53.295540  307731 type.go:168] "Request Body" body=""
	I1202 21:09:53.295618  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:53.295885  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:53.295930  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:53.795650  307731 type.go:168] "Request Body" body=""
	I1202 21:09:53.795732  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:53.796075  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:54.295722  307731 type.go:168] "Request Body" body=""
	I1202 21:09:54.295802  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:54.296147  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:54.795430  307731 type.go:168] "Request Body" body=""
	I1202 21:09:54.795496  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:54.795754  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:55.295532  307731 type.go:168] "Request Body" body=""
	I1202 21:09:55.295606  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:55.295927  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:55.295984  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:55.795762  307731 type.go:168] "Request Body" body=""
	I1202 21:09:55.795835  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:55.796153  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:56.294887  307731 type.go:168] "Request Body" body=""
	I1202 21:09:56.294998  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:56.295324  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:56.794933  307731 type.go:168] "Request Body" body=""
	I1202 21:09:56.795014  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:56.795395  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:57.295124  307731 type.go:168] "Request Body" body=""
	I1202 21:09:57.295200  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:57.295537  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:57.795227  307731 type.go:168] "Request Body" body=""
	I1202 21:09:57.795291  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:57.795605  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:57.795689  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:58.295413  307731 type.go:168] "Request Body" body=""
	I1202 21:09:58.295489  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:58.295818  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:58.795617  307731 type.go:168] "Request Body" body=""
	I1202 21:09:58.795690  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:58.796019  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:59.295296  307731 type.go:168] "Request Body" body=""
	I1202 21:09:59.295368  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:59.295623  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:59.794911  307731 type.go:168] "Request Body" body=""
	I1202 21:09:59.794983  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:59.795300  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:00.295306  307731 type.go:168] "Request Body" body=""
	I1202 21:10:00.295398  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:00.295706  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:00.295756  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:00.795732  307731 type.go:168] "Request Body" body=""
	I1202 21:10:00.795832  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:00.796237  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:01.295009  307731 type.go:168] "Request Body" body=""
	I1202 21:10:01.295081  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:01.295430  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:01.763971  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:10:01.794859  307731 type.go:168] "Request Body" body=""
	I1202 21:10:01.794929  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:01.795196  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:01.835916  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:01.839908  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:10:01.839940  307731 retry.go:31] will retry after 30.677466671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
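The non-round delay above ("will retry after 30.677466671s") indicates a jittered backoff between apply attempts. Below is a minimal sketch of such a retry helper, assuming an exponential-plus-jitter policy; it is not minikube's actual retry.go, whose backoff schedule may differ:

// Hedged sketch of a jittered-backoff retry helper consistent with the
// "will retry after ..." lines; the exact policy is assumed, not copied.
package retrysketch

import (
	"fmt"
	"math/rand"
	"time"
)

// Retry runs fn up to attempts times, sleeping base*2^i plus random jitter
// between failures and logging the wait, like retry.go:31 above.
// base must be positive (rand.Int63n panics on a non-positive argument).
func Retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}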
	[... the same GET poll repeated every ~500ms from 21:10:02.295 through 21:10:15.795, each attempt refused; node_ready.go:55 logged the "connection refused" will-retry warning roughly every two seconds (21:10:02, :04, :07, :09, :11, :13, :15) ...]
	I1202 21:10:16.200937  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:10:16.256562  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:16.259927  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:10:16.259959  307731 retry.go:31] will retry after 18.923209073s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
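Every request in this log advertises "Accept: application/vnd.kubernetes.protobuf,application/json", i.e. the client negotiates protobuf first and falls back to JSON. A hedged client-go sketch of how such a client is configured follows; the host is copied from the log, while the insecure TLS setting is illustrative only (the real client authenticates with the profile's certificates):

// Hedged sketch: building a client that prefers protobuf, which produces the
// Accept header seen on every request above.
package clientsketch

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func newProtobufClient() (*kubernetes.Clientset, error) {
	cfg := &rest.Config{
		Host:            "https://192.168.49.2:8441",
		TLSClientConfig: rest.TLSClientConfig{Insecure: true}, // illustration only
	}
	// Negotiate protobuf first, JSON as fallback:
	// Accept: application/vnd.kubernetes.protobuf,application/json
	cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	cfg.ContentType = "application/vnd.kubernetes.protobuf"
	return kubernetes.NewForConfig(cfg)
}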
	[... poll loop continued unchanged from 21:10:16.295 through 21:10:32.295, all attempts refused, with node_ready.go:55 will-retry warnings at 21:10:18, :20, :22, :24, :27, :29 and :31 ...]
	I1202 21:10:32.517612  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:10:32.588466  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:32.591823  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:32.591933  307731 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
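The out.go line above is where minikube gives up on the 'default-storageclass' addon after its retries. The apply itself is a kubectl invocation inside the node (the ssh_runner lines). As a hedged, local stand-in, not the real ssh-based command runner, the invocation and error surfacing could be approximated like this, with paths copied from the log:

// Hedged, local stand-in for the ssh_runner kubectl invocation above; the
// real command runs inside the node over SSH as root. Paths come from the
// log; everything else is illustrative.
package applysketch

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func applyAddon(manifest string) error {
	cmd := exec.Command(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "-f", manifest,
	)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		// Mirrors the addons.go:477 failure text: exit status plus stderr.
		return fmt.Errorf("%v: %w\nstderr:\n%s", cmd.Args, err, stderr.String())
	}
	return nil
}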
	[... poll loop continued from 21:10:32.795 through 21:10:34.795, still refused, with a node_ready.go:55 will-retry warning at 21:10:33 ...]
	I1202 21:10:35.183965  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:10:35.239016  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:35.242188  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:10:35.242221  307731 retry.go:31] will retry after 25.961571555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
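Both failure modes here are the same symptom: kubectl's openapi fetch against localhost:8441 and the node poll against 192.168.49.2:8441 both get "connection refused", meaning nothing is listening on the apiserver port yet. A small, assumed diagnostic probe (not part of minikube) makes that check explicit:

// Hedged diagnostic (not part of minikube): a plain TCP dial confirms whether
// anything is listening on the apiserver port at all.
package probesketch

import (
	"fmt"
	"net"
	"time"
)

func probeAPIServer(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return fmt.Errorf("apiserver not reachable at %s: %w", addr, err)
	}
	conn.Close()
	return nil
}

While the apiserver is down, probeAPIServer("192.168.49.2:8441") and probeAPIServer("127.0.0.1:8441") would both return the same connection-refused error seen from kubectl and the node poll.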
	[... poll loop continued unchanged from 21:10:35.295 through 21:10:55.295, every attempt refused, with node_ready.go:55 will-retry warnings at 21:10:35, :38, :40, :42, :45, :47, :49, :51 and :54 ...]
	I1202 21:10:55.795785  307731 type.go:168] "Request Body" body=""
	I1202 21:10:55.795851  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:55.796131  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:56.294983  307731 type.go:168] "Request Body" body=""
	I1202 21:10:56.295063  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:56.295386  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:56.795123  307731 type.go:168] "Request Body" body=""
	I1202 21:10:56.795230  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:56.795573  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:56.795626  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:57.294814  307731 type.go:168] "Request Body" body=""
	I1202 21:10:57.294906  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:57.295200  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:57.794903  307731 type.go:168] "Request Body" body=""
	I1202 21:10:57.794977  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:57.795292  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:58.294897  307731 type.go:168] "Request Body" body=""
	I1202 21:10:58.294972  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:58.295313  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:58.795026  307731 type.go:168] "Request Body" body=""
	I1202 21:10:58.795092  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:58.795360  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:59.294927  307731 type.go:168] "Request Body" body=""
	I1202 21:10:59.295017  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:59.295353  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:59.295412  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:59.795027  307731 type.go:168] "Request Body" body=""
	I1202 21:10:59.795102  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:59.795393  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:00.298236  307731 type.go:168] "Request Body" body=""
	I1202 21:11:00.298341  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:00.298735  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:00.795120  307731 type.go:168] "Request Body" body=""
	I1202 21:11:00.795194  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:00.795534  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:01.204061  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:11:01.267039  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:11:01.267090  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:11:01.267174  307731 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 21:11:01.270170  307731 out.go:179] * Enabled addons: 
	I1202 21:11:01.273921  307731 addons.go:530] duration metric: took 1m51.005043213s for enable addons: enabled=[]
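The three stanzas above are the same failure reported at three layers: the raw kubectl stderr (command_runner), the retry warning (addons.go:477), and the user-facing message (out.go). A minimal sketch of that apply-with-retry flow follows; the retry count and backoff are assumptions, not minikube's actual addons logic, and only the command line itself is taken verbatim from the log.

	// apply_addon.go: hypothetical retry wrapper around the kubectl apply seen above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func applyAddon(manifest string, attempts int) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			// sudo accepts leading VAR=value arguments as environment settings.
			out, err := exec.Command("sudo",
				"KUBECONFIG=/var/lib/minikube/kubeconfig",
				"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
				"apply", "--force", "-f", manifest).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply failed: %v\noutput:\n%s", err, out)
			fmt.Println("apply failed, will retry:", err)
			time.Sleep(2 * time.Second) // assumed backoff
		}
		return lastErr
	}

	func main() {
		fmt.Println(applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml", 3))
	}

Note that the --validate=false escape hatch suggested in the stderr only skips the client-side openapi download; with the apiserver socket itself refusing connections, the apply call would fail regardless, which is why the run gives up and records enabled=[].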
	I1202 21:11:01.295263  307731 type.go:168] "Request Body" body=""
	I1202 21:11:01.295359  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:01.295653  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:01.295706  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:01.795541  307731 type.go:168] "Request Body" body=""
	I1202 21:11:01.795613  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:01.795971  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:02.295791  307731 type.go:168] "Request Body" body=""
	I1202 21:11:02.295861  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:02.296199  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:02.794952  307731 type.go:168] "Request Body" body=""
	I1202 21:11:02.795033  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:02.795359  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:03.294886  307731 type.go:168] "Request Body" body=""
	I1202 21:11:03.294966  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:03.295285  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:03.795031  307731 type.go:168] "Request Body" body=""
	I1202 21:11:03.795108  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:03.795398  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:03.795445  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:04.295135  307731 type.go:168] "Request Body" body=""
	I1202 21:11:04.295207  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:04.295489  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:04.797772  307731 type.go:168] "Request Body" body=""
	I1202 21:11:04.797855  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:04.798166  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:05.294871  307731 type.go:168] "Request Body" body=""
	I1202 21:11:05.294952  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:05.295295  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:05.795024  307731 type.go:168] "Request Body" body=""
	I1202 21:11:05.795114  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:05.795840  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:05.795891  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:06.295375  307731 type.go:168] "Request Body" body=""
	I1202 21:11:06.295448  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:06.295699  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:06.795562  307731 type.go:168] "Request Body" body=""
	I1202 21:11:06.795637  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:06.795987  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:07.295777  307731 type.go:168] "Request Body" body=""
	I1202 21:11:07.295853  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:07.296159  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:07.795390  307731 type.go:168] "Request Body" body=""
	I1202 21:11:07.795462  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:07.795723  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:08.295538  307731 type.go:168] "Request Body" body=""
	I1202 21:11:08.295622  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:08.295961  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:08.296019  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:08.795765  307731 type.go:168] "Request Body" body=""
	I1202 21:11:08.795839  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:08.796212  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:09.295353  307731 type.go:168] "Request Body" body=""
	I1202 21:11:09.295424  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:09.295732  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:09.795220  307731 type.go:168] "Request Body" body=""
	I1202 21:11:09.795301  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:09.795760  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:10.295747  307731 type.go:168] "Request Body" body=""
	I1202 21:11:10.295830  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:10.296197  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:10.296275  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:10.794847  307731 type.go:168] "Request Body" body=""
	I1202 21:11:10.794927  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:10.795204  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:11.295063  307731 type.go:168] "Request Body" body=""
	I1202 21:11:11.295142  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:11.295478  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:11.795187  307731 type.go:168] "Request Body" body=""
	I1202 21:11:11.795260  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:11.795582  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:12.294899  307731 type.go:168] "Request Body" body=""
	I1202 21:11:12.294983  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:12.295257  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:12.794909  307731 type.go:168] "Request Body" body=""
	I1202 21:11:12.794985  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:12.795329  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:12.795384  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:13.295067  307731 type.go:168] "Request Body" body=""
	I1202 21:11:13.295150  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:13.295484  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:13.794911  307731 type.go:168] "Request Body" body=""
	I1202 21:11:13.794980  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:13.795267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:14.294851  307731 type.go:168] "Request Body" body=""
	I1202 21:11:14.294929  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:14.295263  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:14.794845  307731 type.go:168] "Request Body" body=""
	I1202 21:11:14.794920  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:14.795267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:15.294957  307731 type.go:168] "Request Body" body=""
	I1202 21:11:15.295024  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:15.295277  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:15.295317  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:15.794930  307731 type.go:168] "Request Body" body=""
	I1202 21:11:15.795005  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:15.795367  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:16.294926  307731 type.go:168] "Request Body" body=""
	I1202 21:11:16.295007  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:16.295351  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:16.794824  307731 type.go:168] "Request Body" body=""
	I1202 21:11:16.794897  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:16.795171  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:17.294881  307731 type.go:168] "Request Body" body=""
	I1202 21:11:17.294952  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:17.295258  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:17.794911  307731 type.go:168] "Request Body" body=""
	I1202 21:11:17.795029  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:17.795337  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:17.795384  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:18.294837  307731 type.go:168] "Request Body" body=""
	I1202 21:11:18.294907  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:18.295270  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:18.794916  307731 type.go:168] "Request Body" body=""
	I1202 21:11:18.794993  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:18.795332  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:19.295034  307731 type.go:168] "Request Body" body=""
	I1202 21:11:19.295134  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:19.295446  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:19.795123  307731 type.go:168] "Request Body" body=""
	I1202 21:11:19.795197  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:19.795502  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:19.795550  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:20.295498  307731 type.go:168] "Request Body" body=""
	I1202 21:11:20.295582  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:20.295890  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:20.795670  307731 type.go:168] "Request Body" body=""
	I1202 21:11:20.795745  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:20.796070  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:21.294797  307731 type.go:168] "Request Body" body=""
	I1202 21:11:21.294862  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:21.295106  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:21.795856  307731 type.go:168] "Request Body" body=""
	I1202 21:11:21.795927  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:21.796206  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:21.796258  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:22.294911  307731 type.go:168] "Request Body" body=""
	I1202 21:11:22.295002  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:22.295336  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:22.795444  307731 type.go:168] "Request Body" body=""
	I1202 21:11:22.795511  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:22.795821  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:23.295637  307731 type.go:168] "Request Body" body=""
	I1202 21:11:23.295716  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:23.296030  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:23.795816  307731 type.go:168] "Request Body" body=""
	I1202 21:11:23.795911  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:23.796220  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:24.294908  307731 type.go:168] "Request Body" body=""
	I1202 21:11:24.295038  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:24.295400  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:24.295449  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:24.794928  307731 type.go:168] "Request Body" body=""
	I1202 21:11:24.795056  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:24.795347  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:25.294949  307731 type.go:168] "Request Body" body=""
	I1202 21:11:25.295023  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:25.295327  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:25.795650  307731 type.go:168] "Request Body" body=""
	I1202 21:11:25.795726  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:25.795991  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:26.294874  307731 type.go:168] "Request Body" body=""
	I1202 21:11:26.294952  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:26.295297  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:26.794989  307731 type.go:168] "Request Body" body=""
	I1202 21:11:26.795064  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:26.795394  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:26.795449  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:27.295101  307731 type.go:168] "Request Body" body=""
	I1202 21:11:27.295170  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:27.295451  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:27.794923  307731 type.go:168] "Request Body" body=""
	I1202 21:11:27.794995  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:27.795354  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:28.294927  307731 type.go:168] "Request Body" body=""
	I1202 21:11:28.295007  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:28.295301  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:28.795573  307731 type.go:168] "Request Body" body=""
	I1202 21:11:28.795646  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:28.795898  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:28.795938  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:29.295736  307731 type.go:168] "Request Body" body=""
	I1202 21:11:29.295816  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:29.296135  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:29.794877  307731 type.go:168] "Request Body" body=""
	I1202 21:11:29.794966  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:29.795325  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:30.295097  307731 type.go:168] "Request Body" body=""
	I1202 21:11:30.295169  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:30.295440  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:30.794919  307731 type.go:168] "Request Body" body=""
	I1202 21:11:30.794997  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:30.795313  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:31.294936  307731 type.go:168] "Request Body" body=""
	I1202 21:11:31.295019  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:31.295348  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:31.295398  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:31.794864  307731 type.go:168] "Request Body" body=""
	I1202 21:11:31.794939  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:31.795188  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:32.294898  307731 type.go:168] "Request Body" body=""
	I1202 21:11:32.294975  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:32.295306  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:32.794926  307731 type.go:168] "Request Body" body=""
	I1202 21:11:32.795000  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:32.795339  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:33.295036  307731 type.go:168] "Request Body" body=""
	I1202 21:11:33.295108  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:33.295363  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:33.794937  307731 type.go:168] "Request Body" body=""
	I1202 21:11:33.795021  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:33.795373  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:33.795429  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:34.294913  307731 type.go:168] "Request Body" body=""
	I1202 21:11:34.294989  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:34.295322  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:34.795011  307731 type.go:168] "Request Body" body=""
	I1202 21:11:34.795087  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:34.795342  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:35.294937  307731 type.go:168] "Request Body" body=""
	I1202 21:11:35.295015  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:35.295337  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:35.795066  307731 type.go:168] "Request Body" body=""
	I1202 21:11:35.795146  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:35.795473  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:35.795529  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:36.295315  307731 type.go:168] "Request Body" body=""
	I1202 21:11:36.295394  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:36.295654  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:36.795469  307731 type.go:168] "Request Body" body=""
	I1202 21:11:36.795546  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:36.795881  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:37.295695  307731 type.go:168] "Request Body" body=""
	I1202 21:11:37.295777  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:37.296183  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:37.795356  307731 type.go:168] "Request Body" body=""
	I1202 21:11:37.795431  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:37.795698  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:37.795750  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:38.295449  307731 type.go:168] "Request Body" body=""
	I1202 21:11:38.295517  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:38.295837  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:38.795650  307731 type.go:168] "Request Body" body=""
	I1202 21:11:38.795731  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:38.796075  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:39.295366  307731 type.go:168] "Request Body" body=""
	I1202 21:11:39.295436  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:39.295758  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:39.795586  307731 type.go:168] "Request Body" body=""
	I1202 21:11:39.795668  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:39.795998  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:39.796055  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:40.294852  307731 type.go:168] "Request Body" body=""
	I1202 21:11:40.294933  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:40.295284  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:40.794857  307731 type.go:168] "Request Body" body=""
	I1202 21:11:40.794934  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:40.795237  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:41.295084  307731 type.go:168] "Request Body" body=""
	I1202 21:11:41.295163  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:41.295481  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:41.794927  307731 type.go:168] "Request Body" body=""
	I1202 21:11:41.795005  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:41.795339  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:42.295575  307731 type.go:168] "Request Body" body=""
	I1202 21:11:42.295656  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:42.295978  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:42.296030  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:42.795792  307731 type.go:168] "Request Body" body=""
	I1202 21:11:42.795869  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:42.796202  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-753958 request/response cycle repeats every ~500 ms from 21:11:43 through 21:12:44, each attempt returning status="" milliseconds=0; node_ready.go:55 logs the identical will-retry warning (dial tcp 192.168.49.2:8441: connect: connection refused) every 2 to 2.5 s throughout ...]
	I1202 21:12:44.794874  307731 type.go:168] "Request Body" body=""
	I1202 21:12:44.794942  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:44.795255  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:45.295105  307731 type.go:168] "Request Body" body=""
	I1202 21:12:45.295214  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:45.295767  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:45.795233  307731 type.go:168] "Request Body" body=""
	I1202 21:12:45.795311  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:45.795638  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:46.295163  307731 type.go:168] "Request Body" body=""
	I1202 21:12:46.295245  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:46.295588  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:46.295650  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:46.794939  307731 type.go:168] "Request Body" body=""
	I1202 21:12:46.795010  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:46.795360  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:47.295090  307731 type.go:168] "Request Body" body=""
	I1202 21:12:47.295174  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:47.295497  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:47.794869  307731 type.go:168] "Request Body" body=""
	I1202 21:12:47.794947  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:47.795235  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:48.294861  307731 type.go:168] "Request Body" body=""
	I1202 21:12:48.294942  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:48.295271  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:48.794853  307731 type.go:168] "Request Body" body=""
	I1202 21:12:48.794940  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:48.795286  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:48.795342  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:49.294843  307731 type.go:168] "Request Body" body=""
	I1202 21:12:49.294911  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:49.295164  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:49.794846  307731 type.go:168] "Request Body" body=""
	I1202 21:12:49.794949  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:49.795276  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:50.294983  307731 type.go:168] "Request Body" body=""
	I1202 21:12:50.295060  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:50.295363  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:50.795576  307731 type.go:168] "Request Body" body=""
	I1202 21:12:50.795648  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:50.795900  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:50.795939  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:51.295852  307731 type.go:168] "Request Body" body=""
	I1202 21:12:51.295925  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:51.296265  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:51.794918  307731 type.go:168] "Request Body" body=""
	I1202 21:12:51.795021  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:51.795350  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:52.294859  307731 type.go:168] "Request Body" body=""
	I1202 21:12:52.294960  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:52.295280  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:52.794928  307731 type.go:168] "Request Body" body=""
	I1202 21:12:52.795027  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:52.795353  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:53.295047  307731 type.go:168] "Request Body" body=""
	I1202 21:12:53.295126  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:53.295420  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:53.295466  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
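Each request above advertises Accept: application/vnd.kubernetes.protobuf,application/json. That header is client-go's standard content negotiation: prefer protobuf on the wire and fall back to JSON. A sketch of how a client expresses that preference follows; the package name protoclient and the kubeconfig argument are assumptions for illustration:

    // Package protoclient sketches the content negotiation that produces the
    // Accept header seen in the request logs above.
    package protoclient

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // NewProtoPreferringClient builds a clientset that asks for protobuf
    // first, falling back to JSON.
    func NewProtoPreferringClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	// These two fields yield the logged Accept header and a protobuf
    	// request body encoding.
    	cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
    	cfg.ContentType = "application/vnd.kubernetes.protobuf"
    	return kubernetes.NewForConfig(cfg)
    }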
	I1202 21:12:53.794824  307731 type.go:168] "Request Body" body=""
	I1202 21:12:53.794894  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:53.795146  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:54.294887  307731 type.go:168] "Request Body" body=""
	I1202 21:12:54.294966  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:54.295276  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:54.794978  307731 type.go:168] "Request Body" body=""
	I1202 21:12:54.795118  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:54.795446  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:55.294835  307731 type.go:168] "Request Body" body=""
	I1202 21:12:55.294908  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:55.295159  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:55.794871  307731 type.go:168] "Request Body" body=""
	I1202 21:12:55.794955  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:55.795341  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:55.795414  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:56.294941  307731 type.go:168] "Request Body" body=""
	I1202 21:12:56.295014  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:56.295303  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:56.794873  307731 type.go:168] "Request Body" body=""
	I1202 21:12:56.794965  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:56.795235  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:57.294970  307731 type.go:168] "Request Body" body=""
	I1202 21:12:57.295048  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:57.295340  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:57.794944  307731 type.go:168] "Request Body" body=""
	I1202 21:12:57.795015  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:57.795337  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:58.294803  307731 type.go:168] "Request Body" body=""
	I1202 21:12:58.294871  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:58.295161  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:58.295224  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:58.794910  307731 type.go:168] "Request Body" body=""
	I1202 21:12:58.795009  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:58.795298  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:59.295027  307731 type.go:168] "Request Body" body=""
	I1202 21:12:59.295104  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:59.295440  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:59.794850  307731 type.go:168] "Request Body" body=""
	I1202 21:12:59.794922  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:59.795190  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:00.295830  307731 type.go:168] "Request Body" body=""
	I1202 21:13:00.295907  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:00.296237  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:00.296286  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:00.794930  307731 type.go:168] "Request Body" body=""
	I1202 21:13:00.795003  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:00.795320  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:01.294866  307731 type.go:168] "Request Body" body=""
	I1202 21:13:01.294943  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:01.295254  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:01.794964  307731 type.go:168] "Request Body" body=""
	I1202 21:13:01.795065  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:01.795411  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:02.294930  307731 type.go:168] "Request Body" body=""
	I1202 21:13:02.295013  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:02.295348  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:02.795414  307731 type.go:168] "Request Body" body=""
	I1202 21:13:02.795493  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:02.795754  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:02.795808  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:03.295626  307731 type.go:168] "Request Body" body=""
	I1202 21:13:03.295706  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:03.296056  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:03.795867  307731 type.go:168] "Request Body" body=""
	I1202 21:13:03.795947  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:03.796294  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:04.294876  307731 type.go:168] "Request Body" body=""
	I1202 21:13:04.294954  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:04.295212  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:04.794889  307731 type.go:168] "Request Body" body=""
	I1202 21:13:04.794976  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:04.795297  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:05.295036  307731 type.go:168] "Request Body" body=""
	I1202 21:13:05.295111  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:05.295416  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:05.295461  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:05.795108  307731 type.go:168] "Request Body" body=""
	I1202 21:13:05.795173  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:05.795466  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:06.295448  307731 type.go:168] "Request Body" body=""
	I1202 21:13:06.295528  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:06.296185  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:06.794905  307731 type.go:168] "Request Body" body=""
	I1202 21:13:06.794985  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:06.795346  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:07.295651  307731 type.go:168] "Request Body" body=""
	I1202 21:13:07.295719  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:07.296051  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:07.296110  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:07.795853  307731 type.go:168] "Request Body" body=""
	I1202 21:13:07.795926  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:07.796263  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:08.294869  307731 type.go:168] "Request Body" body=""
	I1202 21:13:08.294949  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:08.295301  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:08.795548  307731 type.go:168] "Request Body" body=""
	I1202 21:13:08.795627  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:08.795895  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:09.295682  307731 type.go:168] "Request Body" body=""
	I1202 21:13:09.295756  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:09.296097  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:09.296151  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:09.794843  307731 type.go:168] "Request Body" body=""
	I1202 21:13:09.794918  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:09.795258  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:10.295332  307731 type.go:168] "Request Body" body=""
	I1202 21:13:10.295413  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:10.295727  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:10.795553  307731 type.go:168] "Request Body" body=""
	I1202 21:13:10.795634  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:10.796008  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:11.295865  307731 type.go:168] "Request Body" body=""
	I1202 21:13:11.295935  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:11.296253  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:11.296301  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:11.795670  307731 type.go:168] "Request Body" body=""
	I1202 21:13:11.795744  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:11.796123  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:12.294883  307731 type.go:168] "Request Body" body=""
	I1202 21:13:12.294963  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:12.295307  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:12.795041  307731 type.go:168] "Request Body" body=""
	I1202 21:13:12.795119  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:12.795456  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:13.295695  307731 type.go:168] "Request Body" body=""
	I1202 21:13:13.295760  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:13.296010  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:13.795731  307731 type.go:168] "Request Body" body=""
	I1202 21:13:13.795805  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:13.796135  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:13.796187  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:14.294883  307731 type.go:168] "Request Body" body=""
	I1202 21:13:14.294963  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:14.295317  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:14.795004  307731 type.go:168] "Request Body" body=""
	I1202 21:13:14.795086  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:14.795364  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:15.294928  307731 type.go:168] "Request Body" body=""
	I1202 21:13:15.294999  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:15.295367  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:15.794965  307731 type.go:168] "Request Body" body=""
	I1202 21:13:15.795053  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:15.795420  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:16.294820  307731 type.go:168] "Request Body" body=""
	I1202 21:13:16.294896  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:16.295225  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:16.295299  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:16.794923  307731 type.go:168] "Request Body" body=""
	I1202 21:13:16.795000  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:16.795324  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:17.294924  307731 type.go:168] "Request Body" body=""
	I1202 21:13:17.295001  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:17.295350  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:17.795483  307731 type.go:168] "Request Body" body=""
	I1202 21:13:17.795554  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:17.795826  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:18.295595  307731 type.go:168] "Request Body" body=""
	I1202 21:13:18.295669  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:18.296052  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:18.296108  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:18.795725  307731 type.go:168] "Request Body" body=""
	I1202 21:13:18.795799  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:18.796125  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:19.295390  307731 type.go:168] "Request Body" body=""
	I1202 21:13:19.295507  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:19.295770  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:19.795535  307731 type.go:168] "Request Body" body=""
	I1202 21:13:19.795613  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:19.795944  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:20.295747  307731 type.go:168] "Request Body" body=""
	I1202 21:13:20.295849  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:20.296214  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:20.296270  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:20.795538  307731 type.go:168] "Request Body" body=""
	I1202 21:13:20.795609  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:20.795888  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:21.295858  307731 type.go:168] "Request Body" body=""
	I1202 21:13:21.295932  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:21.296299  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:21.795052  307731 type.go:168] "Request Body" body=""
	I1202 21:13:21.795128  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:21.795467  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:22.295167  307731 type.go:168] "Request Body" body=""
	I1202 21:13:22.295249  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:22.295517  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:22.794923  307731 type.go:168] "Request Body" body=""
	I1202 21:13:22.795002  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:22.795333  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:22.795386  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:23.294912  307731 type.go:168] "Request Body" body=""
	I1202 21:13:23.294987  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:23.295388  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:23.795649  307731 type.go:168] "Request Body" body=""
	I1202 21:13:23.795757  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:23.796077  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:24.295857  307731 type.go:168] "Request Body" body=""
	I1202 21:13:24.295930  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:24.296228  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:24.794835  307731 type.go:168] "Request Body" body=""
	I1202 21:13:24.794907  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:24.795214  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:25.294914  307731 type.go:168] "Request Body" body=""
	I1202 21:13:25.294993  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:25.295261  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:25.295309  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:25.794927  307731 type.go:168] "Request Body" body=""
	I1202 21:13:25.795002  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:25.795364  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:26.294920  307731 type.go:168] "Request Body" body=""
	I1202 21:13:26.294999  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:26.295345  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:26.795052  307731 type.go:168] "Request Body" body=""
	I1202 21:13:26.795129  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:26.795387  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:27.294928  307731 type.go:168] "Request Body" body=""
	I1202 21:13:27.295010  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:27.295350  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:27.295406  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:27.795057  307731 type.go:168] "Request Body" body=""
	I1202 21:13:27.795135  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:27.795446  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:28.294855  307731 type.go:168] "Request Body" body=""
	I1202 21:13:28.294926  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:28.295180  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:28.795601  307731 type.go:168] "Request Body" body=""
	I1202 21:13:28.795676  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:28.796027  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:29.295836  307731 type.go:168] "Request Body" body=""
	I1202 21:13:29.295912  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:29.296231  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:29.296292  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:29.794829  307731 type.go:168] "Request Body" body=""
	I1202 21:13:29.794900  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:29.795151  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:30.295730  307731 type.go:168] "Request Body" body=""
	I1202 21:13:30.295806  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:30.296126  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:30.794839  307731 type.go:168] "Request Body" body=""
	I1202 21:13:30.794915  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:30.795249  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:31.297776  307731 type.go:168] "Request Body" body=""
	I1202 21:13:31.297853  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:31.298178  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:31.298228  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:31.794922  307731 type.go:168] "Request Body" body=""
	I1202 21:13:31.795000  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:31.795332  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:32.295025  307731 type.go:168] "Request Body" body=""
	I1202 21:13:32.295102  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:32.295433  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:32.795735  307731 type.go:168] "Request Body" body=""
	I1202 21:13:32.795800  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:32.796165  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:33.294866  307731 type.go:168] "Request Body" body=""
	I1202 21:13:33.294952  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:33.295304  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:33.794929  307731 type.go:168] "Request Body" body=""
	I1202 21:13:33.795016  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:33.795321  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:33.795370  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:34.294855  307731 type.go:168] "Request Body" body=""
	I1202 21:13:34.294928  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:34.295184  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:34.794873  307731 type.go:168] "Request Body" body=""
	I1202 21:13:34.794945  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:34.795278  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:35.294879  307731 type.go:168] "Request Body" body=""
	I1202 21:13:35.294959  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:35.295320  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:35.794857  307731 type.go:168] "Request Body" body=""
	I1202 21:13:35.794925  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:35.795178  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:36.294917  307731 type.go:168] "Request Body" body=""
	I1202 21:13:36.294991  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:36.295321  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:36.295373  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:36.795049  307731 type.go:168] "Request Body" body=""
	I1202 21:13:36.795127  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:36.795475  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:37.295735  307731 type.go:168] "Request Body" body=""
	I1202 21:13:37.295805  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:37.296066  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:37.795800  307731 type.go:168] "Request Body" body=""
	I1202 21:13:37.795873  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:37.796213  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:38.295712  307731 type.go:168] "Request Body" body=""
	I1202 21:13:38.295790  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:38.296136  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:38.296189  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:38.794842  307731 type.go:168] "Request Body" body=""
	I1202 21:13:38.794912  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:38.795163  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:39.294845  307731 type.go:168] "Request Body" body=""
	I1202 21:13:39.294918  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:39.295253  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:39.794921  307731 type.go:168] "Request Body" body=""
	I1202 21:13:39.795000  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:39.795333  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:40.295048  307731 type.go:168] "Request Body" body=""
	I1202 21:13:40.295117  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:40.295365  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:40.794900  307731 type.go:168] "Request Body" body=""
	I1202 21:13:40.794977  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:40.795334  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:40.795390  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:41.294896  307731 type.go:168] "Request Body" body=""
	I1202 21:13:41.294974  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:41.295282  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:41.794943  307731 type.go:168] "Request Body" body=""
	I1202 21:13:41.795060  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:41.795375  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:42.295106  307731 type.go:168] "Request Body" body=""
	I1202 21:13:42.295194  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:42.295589  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:42.794935  307731 type.go:168] "Request Body" body=""
	I1202 21:13:42.795013  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:42.795335  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:43.294846  307731 type.go:168] "Request Body" body=""
	I1202 21:13:43.294916  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:43.295163  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:43.295211  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:43.794883  307731 type.go:168] "Request Body" body=""
	I1202 21:13:43.794959  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:43.795282  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:44.294913  307731 type.go:168] "Request Body" body=""
	I1202 21:13:44.294993  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:44.295365  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:44.795651  307731 type.go:168] "Request Body" body=""
	I1202 21:13:44.795720  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:44.795982  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:45.295757  307731 type.go:168] "Request Body" body=""
	I1202 21:13:45.295838  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:45.296285  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:45.296345  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:45.795035  307731 type.go:168] "Request Body" body=""
	I1202 21:13:45.795117  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:45.795459  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:46.295269  307731 type.go:168] "Request Body" body=""
	I1202 21:13:46.295336  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:46.295589  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:46.794923  307731 type.go:168] "Request Body" body=""
	I1202 21:13:46.795000  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:46.795333  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:47.294920  307731 type.go:168] "Request Body" body=""
	I1202 21:13:47.295001  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:47.295346  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:47.794865  307731 type.go:168] "Request Body" body=""
	I1202 21:13:47.794939  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:47.795193  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:47.795233  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:48.294918  307731 type.go:168] "Request Body" body=""
	I1202 21:13:48.295004  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:48.295374  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:48.795086  307731 type.go:168] "Request Body" body=""
	I1202 21:13:48.795165  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:48.795501  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:49.295207  307731 type.go:168] "Request Body" body=""
	I1202 21:13:49.295288  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:49.295554  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:49.795252  307731 type.go:168] "Request Body" body=""
	I1202 21:13:49.795322  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:49.795632  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:49.795684  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:50.295531  307731 type.go:168] "Request Body" body=""
	I1202 21:13:50.295604  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:50.295957  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:50.795535  307731 type.go:168] "Request Body" body=""
	I1202 21:13:50.795608  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:50.796073  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:51.295825  307731 type.go:168] "Request Body" body=""
	I1202 21:13:51.295900  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:51.296243  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:51.794922  307731 type.go:168] "Request Body" body=""
	I1202 21:13:51.794998  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:51.795338  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:52.294871  307731 type.go:168] "Request Body" body=""
	I1202 21:13:52.294945  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:52.295299  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:52.295371  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:52.794884  307731 type.go:168] "Request Body" body=""
	I1202 21:13:52.794958  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:52.795306  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:53.295056  307731 type.go:168] "Request Body" body=""
	I1202 21:13:53.295127  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:53.295442  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:53.794868  307731 type.go:168] "Request Body" body=""
	I1202 21:13:53.794943  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:53.795222  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:54.294840  307731 type.go:168] "Request Body" body=""
	I1202 21:13:54.294920  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:54.295301  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:54.794903  307731 type.go:168] "Request Body" body=""
	I1202 21:13:54.794979  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:54.795316  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:54.795366  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:55.295563  307731 type.go:168] "Request Body" body=""
	I1202 21:13:55.295641  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:55.295904  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:55.795697  307731 type.go:168] "Request Body" body=""
	I1202 21:13:55.795777  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:55.796113  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:56.294907  307731 type.go:168] "Request Body" body=""
	I1202 21:13:56.294993  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:56.295306  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:56.794876  307731 type.go:168] "Request Body" body=""
	I1202 21:13:56.794944  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:56.795238  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:57.294956  307731 type.go:168] "Request Body" body=""
	I1202 21:13:57.295035  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:57.295369  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:57.295426  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:57.794933  307731 type.go:168] "Request Body" body=""
	I1202 21:13:57.795051  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:57.795372  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:58.294806  307731 type.go:168] "Request Body" body=""
	I1202 21:13:58.294875  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:58.295122  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:58.794831  307731 type.go:168] "Request Body" body=""
	I1202 21:13:58.794910  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:58.795229  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:59.294931  307731 type.go:168] "Request Body" body=""
	I1202 21:13:59.295009  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:59.295361  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:59.795054  307731 type.go:168] "Request Body" body=""
	I1202 21:13:59.795127  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:59.795386  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:59.795436  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:00.295787  307731 type.go:168] "Request Body" body=""
	I1202 21:14:00.295877  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:00.296197  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:00.794932  307731 type.go:168] "Request Body" body=""
	I1202 21:14:00.795003  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:00.795357  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:01.294857  307731 type.go:168] "Request Body" body=""
	I1202 21:14:01.294929  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:01.295226  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:01.795016  307731 type.go:168] "Request Body" body=""
	I1202 21:14:01.795098  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:01.795437  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:01.795499  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:02.294936  307731 type.go:168] "Request Body" body=""
	I1202 21:14:02.295017  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:02.295413  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:02.795680  307731 type.go:168] "Request Body" body=""
	I1202 21:14:02.795756  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:02.796018  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:03.295834  307731 type.go:168] "Request Body" body=""
	I1202 21:14:03.295906  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:03.296221  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:03.794923  307731 type.go:168] "Request Body" body=""
	I1202 21:14:03.795002  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:03.795347  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:04.295602  307731 type.go:168] "Request Body" body=""
	I1202 21:14:04.295676  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:04.296005  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:04.296061  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:04.794930  307731 type.go:168] "Request Body" body=""
	I1202 21:14:04.795015  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:04.795363  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:05.295040  307731 type.go:168] "Request Body" body=""
	I1202 21:14:05.295123  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:05.295457  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:05.795144  307731 type.go:168] "Request Body" body=""
	I1202 21:14:05.795214  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:05.795466  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:06.295374  307731 type.go:168] "Request Body" body=""
	I1202 21:14:06.295448  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:06.295743  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:06.794927  307731 type.go:168] "Request Body" body=""
	I1202 21:14:06.795004  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:06.795340  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:06.795401  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:07.295611  307731 type.go:168] "Request Body" body=""
	I1202 21:14:07.295678  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:07.295927  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:07.795672  307731 type.go:168] "Request Body" body=""
	I1202 21:14:07.795746  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:07.796102  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:08.295447  307731 type.go:168] "Request Body" body=""
	I1202 21:14:08.295523  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:08.295852  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:08.795225  307731 type.go:168] "Request Body" body=""
	I1202 21:14:08.795296  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:08.795548  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:08.795589  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:09.294939  307731 type.go:168] "Request Body" body=""
	I1202 21:14:09.295018  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:09.295329  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:09.794931  307731 type.go:168] "Request Body" body=""
	I1202 21:14:09.795014  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:09.795372  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:10.295213  307731 type.go:168] "Request Body" body=""
	I1202 21:14:10.295283  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:10.295555  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:10.794913  307731 type.go:168] "Request Body" body=""
	I1202 21:14:10.794989  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:10.795326  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:11.294894  307731 type.go:168] "Request Body" body=""
	I1202 21:14:11.294973  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:11.295333  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:11.295391  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:11.794858  307731 type.go:168] "Request Body" body=""
	I1202 21:14:11.794926  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:11.795184  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:12.294866  307731 type.go:168] "Request Body" body=""
	I1202 21:14:12.294951  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:12.295300  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:12.794998  307731 type.go:168] "Request Body" body=""
	I1202 21:14:12.795075  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:12.795409  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:13.295664  307731 type.go:168] "Request Body" body=""
	I1202 21:14:13.295731  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:13.295992  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:13.296034  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:13.795754  307731 type.go:168] "Request Body" body=""
	I1202 21:14:13.795825  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:13.796122  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:14.294862  307731 type.go:168] "Request Body" body=""
	I1202 21:14:14.294938  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:14.295285  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:14.795586  307731 type.go:168] "Request Body" body=""
	I1202 21:14:14.795651  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:14.795954  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:15.295756  307731 type.go:168] "Request Body" body=""
	I1202 21:14:15.295834  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:15.296219  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:15.296293  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:15.794916  307731 type.go:168] "Request Body" body=""
	I1202 21:14:15.794990  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:15.795328  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:16.294867  307731 type.go:168] "Request Body" body=""
	I1202 21:14:16.294940  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:16.295275  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:16.794930  307731 type.go:168] "Request Body" body=""
	I1202 21:14:16.795011  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:16.795371  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:17.295083  307731 type.go:168] "Request Body" body=""
	I1202 21:14:17.295168  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:17.295533  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:17.795809  307731 type.go:168] "Request Body" body=""
	I1202 21:14:17.795877  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:17.796133  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:17.796172  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:18.294860  307731 type.go:168] "Request Body" body=""
	I1202 21:14:18.294933  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:18.295293  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:18.794860  307731 type.go:168] "Request Body" body=""
	I1202 21:14:18.794937  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:18.795278  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:19.294969  307731 type.go:168] "Request Body" body=""
	I1202 21:14:19.295036  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:19.295289  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:19.794925  307731 type.go:168] "Request Body" body=""
	I1202 21:14:19.795003  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:19.795302  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:20.295739  307731 type.go:168] "Request Body" body=""
	I1202 21:14:20.295816  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:20.296151  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:20.296213  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:20.795440  307731 type.go:168] "Request Body" body=""
	I1202 21:14:20.795511  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:20.795763  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:21.295657  307731 type.go:168] "Request Body" body=""
	I1202 21:14:21.295765  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:21.296103  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:21.795787  307731 type.go:168] "Request Body" body=""
	I1202 21:14:21.795862  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:21.796230  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:22.294911  307731 type.go:168] "Request Body" body=""
	I1202 21:14:22.294978  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:22.295233  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:22.794926  307731 type.go:168] "Request Body" body=""
	I1202 21:14:22.794999  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:22.795311  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:22.795368  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:23.294926  307731 type.go:168] "Request Body" body=""
	I1202 21:14:23.295000  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:23.295323  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:23.795654  307731 type.go:168] "Request Body" body=""
	I1202 21:14:23.795728  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:23.795993  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:24.295761  307731 type.go:168] "Request Body" body=""
	I1202 21:14:24.295839  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:24.296161  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:24.794909  307731 type.go:168] "Request Body" body=""
	I1202 21:14:24.794986  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:24.795310  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:25.294859  307731 type.go:168] "Request Body" body=""
	I1202 21:14:25.294935  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:25.295190  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:25.295232  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:25.794923  307731 type.go:168] "Request Body" body=""
	I1202 21:14:25.794997  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:25.795341  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:26.294936  307731 type.go:168] "Request Body" body=""
	I1202 21:14:26.295020  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:26.295383  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:26.795713  307731 type.go:168] "Request Body" body=""
	I1202 21:14:26.795787  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:26.796101  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:27.294827  307731 type.go:168] "Request Body" body=""
	I1202 21:14:27.294901  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:27.295233  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:27.295286  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:27.794831  307731 type.go:168] "Request Body" body=""
	I1202 21:14:27.794916  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:27.795255  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:28.294946  307731 type.go:168] "Request Body" body=""
	I1202 21:14:28.295018  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:28.295278  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:28.794910  307731 type.go:168] "Request Body" body=""
	I1202 21:14:28.794997  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:28.795366  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:29.295050  307731 type.go:168] "Request Body" body=""
	I1202 21:14:29.295134  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:29.295479  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:29.295536  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET poll and "connection refused" retry repeat every ~500ms from 21:14:29 through 21:15:10; duplicate iterations elided ...]
	I1202 21:15:10.795524  307731 type.go:168] "Request Body" body=""
	I1202 21:15:10.795615  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:10.796019  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:11.294844  307731 node_ready.go:38] duration metric: took 6m0.000140797s for node "functional-753958" to be "Ready" ...
	I1202 21:15:11.298019  307731 out.go:203] 
	W1202 21:15:11.300907  307731 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 21:15:11.300927  307731 out.go:285] * 
	W1202 21:15:11.303086  307731 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:15:11.306181  307731 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 21:15:18 functional-753958 containerd[5832]: time="2025-12-02T21:15:18.696151478Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:15:19 functional-753958 containerd[5832]: time="2025-12-02T21:15:19.710675008Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 02 21:15:19 functional-753958 containerd[5832]: time="2025-12-02T21:15:19.713009436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 02 21:15:19 functional-753958 containerd[5832]: time="2025-12-02T21:15:19.720983824Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:15:19 functional-753958 containerd[5832]: time="2025-12-02T21:15:19.721785211Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:15:20 functional-753958 containerd[5832]: time="2025-12-02T21:15:20.649936722Z" level=info msg="No images store for sha256:d59db7295a44a54f2e51ebe8901f849af948acf4a9ad318dd4f11a213e39357b"
	Dec 02 21:15:20 functional-753958 containerd[5832]: time="2025-12-02T21:15:20.652234655Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-753958\""
	Dec 02 21:15:20 functional-753958 containerd[5832]: time="2025-12-02T21:15:20.658987271Z" level=info msg="ImageCreate event name:\"sha256:6a4d7114f1a3d4d0eb28a4f71082d140e55b9bf3c1bfc1edc182e1a4dd43b4b2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:15:20 functional-753958 containerd[5832]: time="2025-12-02T21:15:20.659306465Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-753958\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:15:21 functional-753958 containerd[5832]: time="2025-12-02T21:15:21.473336839Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 02 21:15:21 functional-753958 containerd[5832]: time="2025-12-02T21:15:21.475778775Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 02 21:15:21 functional-753958 containerd[5832]: time="2025-12-02T21:15:21.477921003Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 02 21:15:21 functional-753958 containerd[5832]: time="2025-12-02T21:15:21.489964520Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.443219933Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.452940279Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.455391709Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.472181387Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.570974770Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.573211716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.582176007Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.582709012Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.697261746Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.699334036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.711113544Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.711439869Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:15:24.458087    9794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:15:24.458937    9794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:15:24.460823    9794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:15:24.461439    9794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:15:24.463109    9794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 21:15:24 up  2:57,  0 user,  load average: 0.77, 0.45, 0.91
	Linux functional-753958 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:15:20 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:15:21 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 822.
	Dec 02 21:15:21 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:21 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:21 functional-753958 kubelet[9545]: E1202 21:15:21.560877    9545 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:15:21 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:15:21 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:15:22 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 823.
	Dec 02 21:15:22 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:22 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:22 functional-753958 kubelet[9583]: E1202 21:15:22.345845    9583 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:15:22 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:15:22 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:15:22 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 824.
	Dec 02 21:15:22 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:22 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:23 functional-753958 kubelet[9689]: E1202 21:15:23.067685    9689 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:15:23 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:15:23 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:15:23 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 825.
	Dec 02 21:15:23 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:23 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:23 functional-753958 kubelet[9710]: E1202 21:15:23.851691    9710 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:15:23 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:15:23 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
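Note: the kubelet entries at the end of this dump are the root cause of everything above. Each restart (counters 822 through 825 in the journal) dies with "kubelet is configured to not run on a host using cgroup v1", so the apiserver never comes up and every poll against 192.168.49.2:8441 is refused. As a minimal illustration only (an assumed check, not kubelet's or minikube's actual code), the two hierarchies can be told apart the way most runtimes do it, by the filesystem magic on /sys/fs/cgroup:

    package main

    import (
    	"fmt"

    	"golang.org/x/sys/unix"
    )

    func main() {
    	// Statfs on the cgroup mount point distinguishes the unified v2
    	// hierarchy (CGROUP2_SUPER_MAGIC) from a legacy v1 tmpfs mount.
    	var st unix.Statfs_t
    	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
    		fmt.Println("statfs /sys/fs/cgroup:", err)
    		return
    	}
    	if st.Type == unix.CGROUP2_SUPER_MAGIC {
    		fmt.Println("cgroup v2 (unified hierarchy)")
    	} else {
    		// The branch this host is on: per the validation error above,
    		// kubelet v1.35.0-beta.0 refuses to start here.
    		fmt.Println("cgroup v1 (legacy hierarchy)")
    	}
    }

On this 5.15 AWS kernel the legacy branch would be taken, which matches the crash loop in the journal.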
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958: exit status 2 (345.568564ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753958" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.30s)
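For context, the six-minute failure logged further up ("wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded") is a plain poll-with-deadline loop: the node's Ready condition is re-queried every ~500 ms until the context expires. A self-contained Go sketch of that pattern (illustrative only; the TCP-dial probe is a stand-in, not minikube's implementation):

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    // apiServerUp is a stand-in readiness probe: a bare TCP dial to the
    // apiserver address, which fails with "connection refused" for as
    // long as the control plane is down, as in the log above.
    func apiServerUp(addr string) bool {
    	conn, err := net.DialTimeout("tcp", addr, time.Second)
    	if err != nil {
    		return false
    	}
    	conn.Close()
    	return true
    }

    func main() {
    	// 6m budget, matching "wait 6m0s for node" above.
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()

    	tick := time.NewTicker(500 * time.Millisecond) // poll cadence seen in the log
    	defer tick.Stop()

    	for {
    		select {
    		case <-ctx.Done():
    			// Surfaces as "context deadline exceeded", as in the failure above.
    			fmt.Println("WaitNodeCondition:", ctx.Err())
    			return
    		case <-tick.C:
    			if apiServerUp("192.168.49.2:8441") {
    				fmt.Println("node endpoint reachable")
    				return
    			}
    		}
    	}
    }

With the apiserver crash-looping, the deadline branch is the only one this loop can take.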

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-753958 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-753958 get pods: exit status 1 (104.865447ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-753958 get pods": exit status 1
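Any client built from the same kubeconfig context fails identically while the apiserver is down. A minimal client-go sketch (not part of the test suite; the kubeconfig path and namespace are placeholders) that reproduces the refusal kubectl printed above:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder path; the test resolves the "functional-753958"
    	// context from the jenkins user's kubeconfig.
    	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := clientset.CoreV1().Pods("default").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		// With the apiserver down this carries the same
    		// "connect: connection refused" dial error seen above.
    		fmt.Println("list pods:", err)
    		return
    	}
    	fmt.Println("pods:", len(pods.Items))
    }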
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753958
helpers_test.go:243: (dbg) docker inspect functional-753958:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	        "Created": "2025-12-02T21:00:39.470229988Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301734,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:00:39.535019201Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hostname",
	        "HostsPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hosts",
	        "LogPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a-json.log",
	        "Name": "/functional-753958",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753958:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-753958",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	                "LowerDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753958",
	                "Source": "/var/lib/docker/volumes/functional-753958/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753958",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753958",
	                "name.minikube.sigs.k8s.io": "functional-753958",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44df82336b1507d3d877e818baebb098332071ab7b3e3f7343e15c1fe55b3ab1",
	            "SandboxKey": "/var/run/docker/netns/44df82336b15",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753958": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:7f:7f:d7:c5:84",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e90d0c1216d32743827f22180e4e07c31360f0f3cc3431312aff46869716bb9",
	                    "EndpointID": "5ead8efafa1df1b03c8f1f51c032157081a17706bc48186adc0670bc42c0b521",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753958",
	                        "321ef4a88b51"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
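The inspect output above shows every container port published on a dynamic 127.0.0.1 host port. The apiserver mapping can be extracted with the same Go-template pattern the provisioning logs below use for 22/tcp (sketch, container name from this run):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-753958
    # prints 33111 for the container captured above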
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958: exit status 2 (331.418945ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-753958 logs -n 25: (1.004656154s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-446665 image ls --format yaml --alsologtostderr                                                                                              │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image   │ functional-446665 image ls --format short --alsologtostderr                                                                                             │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image   │ functional-446665 image ls --format json --alsologtostderr                                                                                              │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image   │ functional-446665 image ls --format table --alsologtostderr                                                                                             │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ ssh     │ functional-446665 ssh pgrep buildkitd                                                                                                                   │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │                     │
	│ image   │ functional-446665 image build -t localhost/my-image:functional-446665 testdata/build --alsologtostderr                                                  │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image   │ functional-446665 image ls                                                                                                                              │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ delete  │ -p functional-446665                                                                                                                                    │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ start   │ -p functional-753958 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │                     │
	│ start   │ -p functional-753958 --alsologtostderr -v=8                                                                                                             │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:09 UTC │                     │
	│ cache   │ functional-753958 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ functional-753958 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ functional-753958 cache add registry.k8s.io/pause:latest                                                                                                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ functional-753958 cache add minikube-local-cache-test:functional-753958                                                                                 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ functional-753958 cache delete minikube-local-cache-test:functional-753958                                                                              │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ ssh     │ functional-753958 ssh sudo crictl images                                                                                                                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ ssh     │ functional-753958 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ ssh     │ functional-753958 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │                     │
	│ cache   │ functional-753958 cache reload                                                                                                                          │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ ssh     │ functional-753958 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ kubectl │ functional-753958 kubectl -- --context functional-753958 get pods                                                                                       │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:09:05
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
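	# Decoding the first entry below against that format string (annotation, not part of the captured log):
	#   I1202 21:09:05.869127  307731 out.go:360]
	#   I               -> severity (Info; W/E/F = warning/error/fatal)
	#   1202            -> month/day (Dec 02)
	#   21:09:05.869127 -> wall-clock time with microseconds
	#   307731          -> thread id (here the minikube process)
	#   out.go:360      -> source file and line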
	I1202 21:09:05.869127  307731 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:09:05.869342  307731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:09:05.869372  307731 out.go:374] Setting ErrFile to fd 2...
	I1202 21:09:05.869392  307731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:09:05.870120  307731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:09:05.870642  307731 out.go:368] Setting JSON to false
	I1202 21:09:05.871532  307731 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10284,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 21:09:05.871698  307731 start.go:143] virtualization:  
	I1202 21:09:05.875240  307731 out.go:179] * [functional-753958] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:09:05.878196  307731 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:09:05.878269  307731 notify.go:221] Checking for updates...
	I1202 21:09:05.884072  307731 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:09:05.886942  307731 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:05.889899  307731 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 21:09:05.892813  307731 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:09:05.895771  307731 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:09:05.899217  307731 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:09:05.899365  307731 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:09:05.932799  307731 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:09:05.932919  307731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:09:05.993966  307731 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:09:05.984741651 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:09:05.994072  307731 docker.go:319] overlay module found
	I1202 21:09:05.997248  307731 out.go:179] * Using the docker driver based on existing profile
	I1202 21:09:06.000038  307731 start.go:309] selected driver: docker
	I1202 21:09:06.000060  307731 start.go:927] validating driver "docker" against &{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:09:06.000154  307731 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:09:06.000264  307731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:09:06.066709  307731 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:09:06.057768194 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:09:06.067144  307731 cni.go:84] Creating CNI manager for ""
	I1202 21:09:06.067209  307731 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 21:09:06.067263  307731 start.go:353] cluster config:
	{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:09:06.070421  307731 out.go:179] * Starting "functional-753958" primary control-plane node in "functional-753958" cluster
	I1202 21:09:06.073261  307731 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 21:09:06.078117  307731 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:09:06.080953  307731 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 21:09:06.081041  307731 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:09:06.101516  307731 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:09:06.101541  307731 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 21:09:06.138751  307731 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1202 21:09:06.314468  307731 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
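	# Annotation: both 404s above mean no preload tarball has been published for
	# v1.35.0-beta.0 on containerd/arm64, so minikube falls back to the per-image
	# cache below. The check is reproducible by hand against the same URL:
	#   curl -s -o /dev/null -w '%{http_code}\n' https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
	#   404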
	I1202 21:09:06.314628  307731 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/config.json ...
	I1202 21:09:06.314753  307731 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.314852  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 21:09:06.314868  307731 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 127.02µs
	I1202 21:09:06.314884  307731 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 21:09:06.314900  307731 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.314935  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 21:09:06.314945  307731 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 46.735µs
	I1202 21:09:06.314952  307731 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 21:09:06.314968  307731 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315000  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 21:09:06.315009  307731 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 42.764µs
	I1202 21:09:06.315016  307731 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 21:09:06.315030  307731 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315059  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 21:09:06.315069  307731 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 39.875µs
	I1202 21:09:06.315075  307731 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 21:09:06.315089  307731 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315119  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 21:09:06.315127  307731 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 38.629µs
	I1202 21:09:06.315144  307731 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 21:09:06.315143  307731 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:09:06.315177  307731 start.go:360] acquireMachinesLock for functional-753958: {Name:mk3203202a2efc5b27c2a0a16d932dc1b1f07522 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315202  307731 start.go:364] duration metric: took 13.3µs to acquireMachinesLock for "functional-753958"
	I1202 21:09:06.315219  307731 start.go:96] Skipping create...Using existing machine configuration
	I1202 21:09:06.315230  307731 fix.go:54] fixHost starting: 
	I1202 21:09:06.315183  307731 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315307  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 21:09:06.315332  307731 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 153.571µs
	I1202 21:09:06.315357  307731 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 21:09:06.315387  307731 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315443  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 21:09:06.315465  307731 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 80.424µs
	I1202 21:09:06.315488  307731 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:09:06.315527  307731 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:09:06.315588  307731 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 21:09:06.315619  307731 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 95.488µs
	I1202 21:09:06.315640  307731 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 21:09:06.315489  307731 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 21:09:06.315801  307731 cache.go:87] Successfully saved all images to host disk.
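	# Annotation: the images saved above live as per-architecture tarballs under
	# MINIKUBE_HOME; a hypothetical inventory using the same paths as this run:
	#   ls /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/
	#   coredns  etcd_3.6.5-0  kube-apiserver_v1.35.0-beta.0  kube-controller-manager_v1.35.0-beta.0  kube-proxy_v1.35.0-beta.0  kube-scheduler_v1.35.0-beta.0  pause_3.10.1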
	I1202 21:09:06.333736  307731 fix.go:112] recreateIfNeeded on functional-753958: state=Running err=<nil>
	W1202 21:09:06.333771  307731 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 21:09:06.337175  307731 out.go:252] * Updating the running docker "functional-753958" container ...
	I1202 21:09:06.337206  307731 machine.go:94] provisionDockerMachine start ...
	I1202 21:09:06.337301  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:06.354474  307731 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:06.354810  307731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:09:06.354830  307731 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:09:06.501197  307731 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-753958
	
	I1202 21:09:06.501220  307731 ubuntu.go:182] provisioning hostname "functional-753958"
	I1202 21:09:06.501288  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:06.519375  307731 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:06.519710  307731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:09:06.519727  307731 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-753958 && echo "functional-753958" | sudo tee /etc/hostname
	I1202 21:09:06.687724  307731 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-753958
	
	I1202 21:09:06.687814  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:06.707419  307731 main.go:143] libmachine: Using SSH client type: native
	I1202 21:09:06.707758  307731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:09:06.707780  307731 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753958/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:09:06.858340  307731 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 21:09:06.858365  307731 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 21:09:06.858387  307731 ubuntu.go:190] setting up certificates
	I1202 21:09:06.858407  307731 provision.go:84] configureAuth start
	I1202 21:09:06.858472  307731 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
	I1202 21:09:06.877925  307731 provision.go:143] copyHostCerts
	I1202 21:09:06.877980  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 21:09:06.878020  307731 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 21:09:06.878036  307731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 21:09:06.878121  307731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 21:09:06.878219  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 21:09:06.878244  307731 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 21:09:06.878253  307731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 21:09:06.878283  307731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 21:09:06.878341  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 21:09:06.878361  307731 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 21:09:06.878366  307731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 21:09:06.878392  307731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 21:09:06.878454  307731 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.functional-753958 san=[127.0.0.1 192.168.49.2 functional-753958 localhost minikube]
	I1202 21:09:07.212788  307731 provision.go:177] copyRemoteCerts
	I1202 21:09:07.212871  307731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:09:07.212914  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.229990  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.334622  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 21:09:07.334690  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 21:09:07.358156  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 21:09:07.358212  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:09:07.374829  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 21:09:07.374936  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 21:09:07.391856  307731 provision.go:87] duration metric: took 533.420534ms to configureAuth
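	# Annotation: configureAuth regenerated the machine server certificate with
	# san=[127.0.0.1 192.168.49.2 functional-753958 localhost minikube] and copied
	# it to /etc/docker. A hypothetical post-check of the SANs (standard openssl):
	#   sudo openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName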
	I1202 21:09:07.391883  307731 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:09:07.392075  307731 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:09:07.392088  307731 machine.go:97] duration metric: took 1.054874904s to provisionDockerMachine
	I1202 21:09:07.392096  307731 start.go:293] postStartSetup for "functional-753958" (driver="docker")
	I1202 21:09:07.392108  307731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:09:07.392158  307731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:09:07.392201  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.409892  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.513929  307731 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:09:07.517313  307731 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1202 21:09:07.517377  307731 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1202 21:09:07.517399  307731 command_runner.go:130] > VERSION_ID="12"
	I1202 21:09:07.517411  307731 command_runner.go:130] > VERSION="12 (bookworm)"
	I1202 21:09:07.517423  307731 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1202 21:09:07.517428  307731 command_runner.go:130] > ID=debian
	I1202 21:09:07.517432  307731 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1202 21:09:07.517437  307731 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1202 21:09:07.517460  307731 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1202 21:09:07.517505  307731 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:09:07.517555  307731 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:09:07.517574  307731 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 21:09:07.517638  307731 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 21:09:07.517741  307731 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 21:09:07.517755  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> /etc/ssl/certs/2632412.pem
	I1202 21:09:07.517830  307731 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts -> hosts in /etc/test/nested/copy/263241
	I1202 21:09:07.517839  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts -> /etc/test/nested/copy/263241/hosts
	I1202 21:09:07.517882  307731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/263241
	I1202 21:09:07.525639  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 21:09:07.543648  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts --> /etc/test/nested/copy/263241/hosts (40 bytes)
	I1202 21:09:07.560944  307731 start.go:296] duration metric: took 168.831988ms for postStartSetup
	I1202 21:09:07.561067  307731 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:09:07.561116  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.579622  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.682695  307731 command_runner.go:130] > 12%
	I1202 21:09:07.682778  307731 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:09:07.687210  307731 command_runner.go:130] > 172G
	I1202 21:09:07.687707  307731 fix.go:56] duration metric: took 1.372471826s for fixHost
	I1202 21:09:07.687729  307731 start.go:83] releasing machines lock for "functional-753958", held for 1.372515567s
	I1202 21:09:07.687799  307731 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
	I1202 21:09:07.704780  307731 ssh_runner.go:195] Run: cat /version.json
	I1202 21:09:07.704833  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.704860  307731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:09:07.704931  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:07.726613  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.737148  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:07.829144  307731 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764169655-21974", "minikube_version": "v1.37.0", "commit": "5499406178e21d60d74d327c9716de794e8a4797"}
	I1202 21:09:07.829307  307731 ssh_runner.go:195] Run: systemctl --version
	I1202 21:09:07.919742  307731 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1202 21:09:07.919788  307731 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1202 21:09:07.919811  307731 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1202 21:09:07.919883  307731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 21:09:07.924332  307731 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1202 21:09:07.924495  307731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:09:07.924590  307731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:09:07.932451  307731 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 21:09:07.932475  307731 start.go:496] detecting cgroup driver to use...
	I1202 21:09:07.932505  307731 detect.go:187] detected "cgroupfs" cgroup driver on host os
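	# Annotation: the "cgroupfs" detection matches CgroupDriver:cgroupfs in the
	# docker info dump above; the same answer can be read directly with:
	#   docker info --format '{{.CgroupDriver}}'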
	I1202 21:09:07.932553  307731 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 21:09:07.947902  307731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 21:09:07.964330  307731 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:09:07.964400  307731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:09:07.980760  307731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:09:07.995134  307731 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:09:08.122567  307731 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:09:08.232585  307731 docker.go:234] disabling docker service ...
	I1202 21:09:08.232660  307731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:09:08.247806  307731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:09:08.260075  307731 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:09:08.380227  307731 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:09:08.498586  307731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
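	# Annotation: after the stop/disable/mask sequence above, docker should report
	# as masked, leaving containerd as the only runtime; a hypothetical follow-up check:
	#   sudo systemctl is-enabled docker.service    # masked
	#   sudo systemctl is-active containerd.service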
	I1202 21:09:08.511975  307731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:09:08.525630  307731 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
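	# Annotation: the crictl.yaml written above pins crictl to the containerd
	# socket; an equivalent using crictl's own config subcommand (not what the test runs):
	#   sudo crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock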
	I1202 21:09:08.525792  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 21:09:08.534331  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 21:09:08.543412  307731 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 21:09:08.543534  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 21:09:08.552561  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 21:09:08.561268  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 21:09:08.570127  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 21:09:08.578716  307731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:09:08.586804  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 21:09:08.595543  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 21:09:08.604412  307731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
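The sed edits above rewrite /etc/containerd/config.toml in place: pause image, cgroup driver, runc v2 shim, CNI conf dir, and unprivileged ports. A hedged way to spot-check the result before restarting containerd (grep patterns are illustrative):

    grep -En 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    # Expected after this run's edits:
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true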
	I1202 21:09:08.613462  307731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:09:08.620008  307731 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1202 21:09:08.621008  307731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 21:09:08.628262  307731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:09:08.744391  307731 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1202 21:09:08.864675  307731 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 21:09:08.864794  307731 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 21:09:08.868351  307731 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I1202 21:09:08.868411  307731 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1202 21:09:08.868454  307731 command_runner.go:130] > Device: 0,72	Inode: 1612        Links: 1
	I1202 21:09:08.868480  307731 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 21:09:08.868521  307731 command_runner.go:130] > Access: 2025-12-02 21:09:08.840863455 +0000
	I1202 21:09:08.868544  307731 command_runner.go:130] > Modify: 2025-12-02 21:09:08.840863455 +0000
	I1202 21:09:08.868569  307731 command_runner.go:130] > Change: 2025-12-02 21:09:08.840863455 +0000
	I1202 21:09:08.868599  307731 command_runner.go:130] >  Birth: -
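The 60s wait logged above amounts to polling for the unix socket after the containerd restart. An equivalent standalone loop (the timeout mirrors the log; the loop itself is an illustration, not minikube's Go code):

    # Poll up to 60s for the containerd socket to appear.
    timeout 60 sh -c 'until stat /run/containerd/containerd.sock >/dev/null 2>&1; do sleep 1; done' \
      && echo "containerd socket is up"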
	I1202 21:09:08.868892  307731 start.go:564] Will wait 60s for crictl version
	I1202 21:09:08.868989  307731 ssh_runner.go:195] Run: which crictl
	I1202 21:09:08.872054  307731 command_runner.go:130] > /usr/local/bin/crictl
	I1202 21:09:08.872553  307731 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:09:08.897996  307731 command_runner.go:130] > Version:  0.1.0
	I1202 21:09:08.898089  307731 command_runner.go:130] > RuntimeName:  containerd
	I1202 21:09:08.898120  307731 command_runner.go:130] > RuntimeVersion:  v2.1.5
	I1202 21:09:08.898152  307731 command_runner.go:130] > RuntimeApiVersion:  v1
	I1202 21:09:08.900685  307731 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 21:09:08.900802  307731 ssh_runner.go:195] Run: containerd --version
	I1202 21:09:08.918917  307731 command_runner.go:130] > containerd containerd.io v2.1.5 fcd43222d6b07379a4be9786bda52438f0dd16a1
	I1202 21:09:08.920319  307731 ssh_runner.go:195] Run: containerd --version
	I1202 21:09:08.938561  307731 command_runner.go:130] > containerd containerd.io v2.1.5 fcd43222d6b07379a4be9786bda52438f0dd16a1
	I1202 21:09:08.945896  307731 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 21:09:08.948895  307731 cli_runner.go:164] Run: docker network inspect functional-753958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:09:08.964797  307731 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:09:08.968415  307731 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1202 21:09:08.968697  307731 kubeadm.go:884] updating cluster {Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 21:09:08.968812  307731 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 21:09:08.968871  307731 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:09:08.989960  307731 command_runner.go:130] > {
	I1202 21:09:08.989978  307731 command_runner.go:130] >   "images":  [
	I1202 21:09:08.989982  307731 command_runner.go:130] >     {
	I1202 21:09:08.989991  307731 command_runner.go:130] >       "id":  "sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51",
	I1202 21:09:08.989996  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990002  307731 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1202 21:09:08.990005  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990009  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990013  307731 command_runner.go:130] >       "size":  "8032639",
	I1202 21:09:08.990018  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990022  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990025  307731 command_runner.go:130] >     },
	I1202 21:09:08.990027  307731 command_runner.go:130] >     {
	I1202 21:09:08.990039  307731 command_runner.go:130] >       "id":  "sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1202 21:09:08.990044  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990049  307731 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1202 21:09:08.990052  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990057  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990066  307731 command_runner.go:130] >       "size":  "21166088",
	I1202 21:09:08.990071  307731 command_runner.go:130] >       "username":  "nonroot",
	I1202 21:09:08.990075  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990078  307731 command_runner.go:130] >     },
	I1202 21:09:08.990085  307731 command_runner.go:130] >     {
	I1202 21:09:08.990092  307731 command_runner.go:130] >       "id":  "sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1202 21:09:08.990096  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990101  307731 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1202 21:09:08.990104  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990108  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990112  307731 command_runner.go:130] >       "size":  "21134420",
	I1202 21:09:08.990116  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990120  307731 command_runner.go:130] >         "value":  "0"
	I1202 21:09:08.990123  307731 command_runner.go:130] >       },
	I1202 21:09:08.990126  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990130  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990133  307731 command_runner.go:130] >     },
	I1202 21:09:08.990136  307731 command_runner.go:130] >     {
	I1202 21:09:08.990143  307731 command_runner.go:130] >       "id":  "sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1202 21:09:08.990147  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990156  307731 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1202 21:09:08.990159  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990163  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990167  307731 command_runner.go:130] >       "size":  "24676285",
	I1202 21:09:08.990170  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990175  307731 command_runner.go:130] >         "value":  "0"
	I1202 21:09:08.990178  307731 command_runner.go:130] >       },
	I1202 21:09:08.990182  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990189  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990192  307731 command_runner.go:130] >     },
	I1202 21:09:08.990195  307731 command_runner.go:130] >     {
	I1202 21:09:08.990202  307731 command_runner.go:130] >       "id":  "sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1202 21:09:08.990206  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990213  307731 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1202 21:09:08.990216  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990220  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990224  307731 command_runner.go:130] >       "size":  "20658969",
	I1202 21:09:08.990227  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990231  307731 command_runner.go:130] >         "value":  "0"
	I1202 21:09:08.990233  307731 command_runner.go:130] >       },
	I1202 21:09:08.990237  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990241  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990244  307731 command_runner.go:130] >     },
	I1202 21:09:08.990246  307731 command_runner.go:130] >     {
	I1202 21:09:08.990253  307731 command_runner.go:130] >       "id":  "sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1202 21:09:08.990257  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990262  307731 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1202 21:09:08.990265  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990269  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990273  307731 command_runner.go:130] >       "size":  "22428165",
	I1202 21:09:08.990277  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990280  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990283  307731 command_runner.go:130] >     },
	I1202 21:09:08.990287  307731 command_runner.go:130] >     {
	I1202 21:09:08.990293  307731 command_runner.go:130] >       "id":  "sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1202 21:09:08.990297  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990302  307731 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1202 21:09:08.990305  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990314  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990318  307731 command_runner.go:130] >       "size":  "15389290",
	I1202 21:09:08.990322  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990329  307731 command_runner.go:130] >         "value":  "0"
	I1202 21:09:08.990332  307731 command_runner.go:130] >       },
	I1202 21:09:08.990336  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990339  307731 command_runner.go:130] >       "pinned":  false
	I1202 21:09:08.990342  307731 command_runner.go:130] >     },
	I1202 21:09:08.990345  307731 command_runner.go:130] >     {
	I1202 21:09:08.990352  307731 command_runner.go:130] >       "id":  "sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1202 21:09:08.990356  307731 command_runner.go:130] >       "repoTags":  [
	I1202 21:09:08.990361  307731 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1202 21:09:08.990364  307731 command_runner.go:130] >       ],
	I1202 21:09:08.990371  307731 command_runner.go:130] >       "repoDigests":  [],
	I1202 21:09:08.990375  307731 command_runner.go:130] >       "size":  "265458",
	I1202 21:09:08.990379  307731 command_runner.go:130] >       "uid":  {
	I1202 21:09:08.990383  307731 command_runner.go:130] >         "value":  "65535"
	I1202 21:09:08.990386  307731 command_runner.go:130] >       },
	I1202 21:09:08.990389  307731 command_runner.go:130] >       "username":  "",
	I1202 21:09:08.990393  307731 command_runner.go:130] >       "pinned":  true
	I1202 21:09:08.990396  307731 command_runner.go:130] >     }
	I1202 21:09:08.990402  307731 command_runner.go:130] >   ]
	I1202 21:09:08.990404  307731 command_runner.go:130] > }
	I1202 21:09:08.992021  307731 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 21:09:08.992044  307731 cache_images.go:86] Images are preloaded, skipping loading
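The preload check compares the CRI image store against the expected image set for v1.35.0-beta.0. To eyeball the same JSON by hand (jq is an assumption here; minikube parses the output in Go):

    # List repo tags known to containerd's CRI image store.
    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort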
	I1202 21:09:08.992052  307731 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1202 21:09:08.992155  307731 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-753958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 21:09:08.992222  307731 ssh_runner.go:195] Run: sudo crictl info
	I1202 21:09:09.027109  307731 command_runner.go:130] > {
	I1202 21:09:09.027127  307731 command_runner.go:130] >   "cniconfig": {
	I1202 21:09:09.027132  307731 command_runner.go:130] >     "Networks": [
	I1202 21:09:09.027136  307731 command_runner.go:130] >       {
	I1202 21:09:09.027142  307731 command_runner.go:130] >         "Config": {
	I1202 21:09:09.027146  307731 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I1202 21:09:09.027151  307731 command_runner.go:130] >           "Name": "cni-loopback",
	I1202 21:09:09.027155  307731 command_runner.go:130] >           "Plugins": [
	I1202 21:09:09.027164  307731 command_runner.go:130] >             {
	I1202 21:09:09.027168  307731 command_runner.go:130] >               "Network": {
	I1202 21:09:09.027172  307731 command_runner.go:130] >                 "ipam": {},
	I1202 21:09:09.027178  307731 command_runner.go:130] >                 "type": "loopback"
	I1202 21:09:09.027181  307731 command_runner.go:130] >               },
	I1202 21:09:09.027186  307731 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I1202 21:09:09.027189  307731 command_runner.go:130] >             }
	I1202 21:09:09.027193  307731 command_runner.go:130] >           ],
	I1202 21:09:09.027203  307731 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I1202 21:09:09.027207  307731 command_runner.go:130] >         },
	I1202 21:09:09.027212  307731 command_runner.go:130] >         "IFName": "lo"
	I1202 21:09:09.027215  307731 command_runner.go:130] >       }
	I1202 21:09:09.027218  307731 command_runner.go:130] >     ],
	I1202 21:09:09.027223  307731 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I1202 21:09:09.027227  307731 command_runner.go:130] >     "PluginDirs": [
	I1202 21:09:09.027230  307731 command_runner.go:130] >       "/opt/cni/bin"
	I1202 21:09:09.027234  307731 command_runner.go:130] >     ],
	I1202 21:09:09.027238  307731 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I1202 21:09:09.027242  307731 command_runner.go:130] >     "Prefix": "eth"
	I1202 21:09:09.027245  307731 command_runner.go:130] >   },
	I1202 21:09:09.027248  307731 command_runner.go:130] >   "config": {
	I1202 21:09:09.027252  307731 command_runner.go:130] >     "cdiSpecDirs": [
	I1202 21:09:09.027256  307731 command_runner.go:130] >       "/etc/cdi",
	I1202 21:09:09.027259  307731 command_runner.go:130] >       "/var/run/cdi"
	I1202 21:09:09.027263  307731 command_runner.go:130] >     ],
	I1202 21:09:09.027266  307731 command_runner.go:130] >     "cni": {
	I1202 21:09:09.027269  307731 command_runner.go:130] >       "binDir": "",
	I1202 21:09:09.027273  307731 command_runner.go:130] >       "binDirs": [
	I1202 21:09:09.027277  307731 command_runner.go:130] >         "/opt/cni/bin"
	I1202 21:09:09.027280  307731 command_runner.go:130] >       ],
	I1202 21:09:09.027285  307731 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I1202 21:09:09.027289  307731 command_runner.go:130] >       "confTemplate": "",
	I1202 21:09:09.027292  307731 command_runner.go:130] >       "ipPref": "",
	I1202 21:09:09.027300  307731 command_runner.go:130] >       "maxConfNum": 1,
	I1202 21:09:09.027304  307731 command_runner.go:130] >       "setupSerially": false,
	I1202 21:09:09.027309  307731 command_runner.go:130] >       "useInternalLoopback": false
	I1202 21:09:09.027312  307731 command_runner.go:130] >     },
	I1202 21:09:09.027321  307731 command_runner.go:130] >     "containerd": {
	I1202 21:09:09.027325  307731 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I1202 21:09:09.027330  307731 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I1202 21:09:09.027335  307731 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false,
	I1202 21:09:09.027339  307731 command_runner.go:130] >       "runtimes": {
	I1202 21:09:09.027342  307731 command_runner.go:130] >         "runc": {
	I1202 21:09:09.027347  307731 command_runner.go:130] >           "ContainerAnnotations": null,
	I1202 21:09:09.027351  307731 command_runner.go:130] >           "PodAnnotations": null,
	I1202 21:09:09.027357  307731 command_runner.go:130] >           "baseRuntimeSpec": "",
	I1202 21:09:09.027361  307731 command_runner.go:130] >           "cgroupWritable": false,
	I1202 21:09:09.027365  307731 command_runner.go:130] >           "cniConfDir": "",
	I1202 21:09:09.027370  307731 command_runner.go:130] >           "cniMaxConfNum": 0,
	I1202 21:09:09.027374  307731 command_runner.go:130] >           "io_type": "",
	I1202 21:09:09.027378  307731 command_runner.go:130] >           "options": {
	I1202 21:09:09.027382  307731 command_runner.go:130] >             "BinaryName": "",
	I1202 21:09:09.027386  307731 command_runner.go:130] >             "CriuImagePath": "",
	I1202 21:09:09.027390  307731 command_runner.go:130] >             "CriuWorkPath": "",
	I1202 21:09:09.027394  307731 command_runner.go:130] >             "IoGid": 0,
	I1202 21:09:09.027398  307731 command_runner.go:130] >             "IoUid": 0,
	I1202 21:09:09.027402  307731 command_runner.go:130] >             "NoNewKeyring": false,
	I1202 21:09:09.027407  307731 command_runner.go:130] >             "Root": "",
	I1202 21:09:09.027411  307731 command_runner.go:130] >             "ShimCgroup": "",
	I1202 21:09:09.027415  307731 command_runner.go:130] >             "SystemdCgroup": false
	I1202 21:09:09.027418  307731 command_runner.go:130] >           },
	I1202 21:09:09.027424  307731 command_runner.go:130] >           "privileged_without_host_devices": false,
	I1202 21:09:09.027430  307731 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I1202 21:09:09.027434  307731 command_runner.go:130] >           "runtimePath": "",
	I1202 21:09:09.027440  307731 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I1202 21:09:09.027444  307731 command_runner.go:130] >           "sandboxer": "podsandbox",
	I1202 21:09:09.027451  307731 command_runner.go:130] >           "snapshotter": ""
	I1202 21:09:09.027455  307731 command_runner.go:130] >         }
	I1202 21:09:09.027458  307731 command_runner.go:130] >       }
	I1202 21:09:09.027461  307731 command_runner.go:130] >     },
	I1202 21:09:09.027470  307731 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I1202 21:09:09.027476  307731 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I1202 21:09:09.027481  307731 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I1202 21:09:09.027485  307731 command_runner.go:130] >     "disableApparmor": false,
	I1202 21:09:09.027490  307731 command_runner.go:130] >     "disableHugetlbController": true,
	I1202 21:09:09.027494  307731 command_runner.go:130] >     "disableProcMount": false,
	I1202 21:09:09.027499  307731 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I1202 21:09:09.027503  307731 command_runner.go:130] >     "enableCDI": true,
	I1202 21:09:09.027507  307731 command_runner.go:130] >     "enableSelinux": false,
	I1202 21:09:09.027511  307731 command_runner.go:130] >     "enableUnprivilegedICMP": true,
	I1202 21:09:09.027515  307731 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I1202 21:09:09.027520  307731 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I1202 21:09:09.027525  307731 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I1202 21:09:09.027529  307731 command_runner.go:130] >     "maxContainerLogLineSize": 16384,
	I1202 21:09:09.027534  307731 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I1202 21:09:09.027538  307731 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I1202 21:09:09.027544  307731 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I1202 21:09:09.027548  307731 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I1202 21:09:09.027554  307731 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri",
	I1202 21:09:09.027558  307731 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I1202 21:09:09.027563  307731 command_runner.go:130] >     "unsetSeccompProfile": ""
	I1202 21:09:09.027566  307731 command_runner.go:130] >   },
	I1202 21:09:09.027569  307731 command_runner.go:130] >   "features": {
	I1202 21:09:09.027574  307731 command_runner.go:130] >     "supplemental_groups_policy": true
	I1202 21:09:09.027577  307731 command_runner.go:130] >   },
	I1202 21:09:09.027581  307731 command_runner.go:130] >   "golang": "go1.24.9",
	I1202 21:09:09.027591  307731 command_runner.go:130] >   "lastCNILoadStatus": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1202 21:09:09.027600  307731 command_runner.go:130] >   "lastCNILoadStatus.default": "cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config",
	I1202 21:09:09.027604  307731 command_runner.go:130] >   "runtimeHandlers": [
	I1202 21:09:09.027610  307731 command_runner.go:130] >     {
	I1202 21:09:09.027614  307731 command_runner.go:130] >       "features": {
	I1202 21:09:09.027619  307731 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1202 21:09:09.027623  307731 command_runner.go:130] >         "user_namespaces": true
	I1202 21:09:09.027626  307731 command_runner.go:130] >       }
	I1202 21:09:09.027629  307731 command_runner.go:130] >     },
	I1202 21:09:09.027631  307731 command_runner.go:130] >     {
	I1202 21:09:09.027635  307731 command_runner.go:130] >       "features": {
	I1202 21:09:09.027639  307731 command_runner.go:130] >         "recursive_read_only_mounts": true,
	I1202 21:09:09.027644  307731 command_runner.go:130] >         "user_namespaces": true
	I1202 21:09:09.027646  307731 command_runner.go:130] >       },
	I1202 21:09:09.027650  307731 command_runner.go:130] >       "name": "runc"
	I1202 21:09:09.027653  307731 command_runner.go:130] >     }
	I1202 21:09:09.027656  307731 command_runner.go:130] >   ],
	I1202 21:09:09.027659  307731 command_runner.go:130] >   "status": {
	I1202 21:09:09.027663  307731 command_runner.go:130] >     "conditions": [
	I1202 21:09:09.027666  307731 command_runner.go:130] >       {
	I1202 21:09:09.027670  307731 command_runner.go:130] >         "message": "",
	I1202 21:09:09.027673  307731 command_runner.go:130] >         "reason": "",
	I1202 21:09:09.027677  307731 command_runner.go:130] >         "status": true,
	I1202 21:09:09.027681  307731 command_runner.go:130] >         "type": "RuntimeReady"
	I1202 21:09:09.027685  307731 command_runner.go:130] >       },
	I1202 21:09:09.027688  307731 command_runner.go:130] >       {
	I1202 21:09:09.027694  307731 command_runner.go:130] >         "message": "Network plugin returns error: cni plugin not initialized",
	I1202 21:09:09.027699  307731 command_runner.go:130] >         "reason": "NetworkPluginNotReady",
	I1202 21:09:09.027703  307731 command_runner.go:130] >         "status": false,
	I1202 21:09:09.027707  307731 command_runner.go:130] >         "type": "NetworkReady"
	I1202 21:09:09.027710  307731 command_runner.go:130] >       },
	I1202 21:09:09.027713  307731 command_runner.go:130] >       {
	I1202 21:09:09.027718  307731 command_runner.go:130] >         "message": "",
	I1202 21:09:09.027722  307731 command_runner.go:130] >         "reason": "",
	I1202 21:09:09.027726  307731 command_runner.go:130] >         "status": true,
	I1202 21:09:09.027731  307731 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings"
	I1202 21:09:09.027737  307731 command_runner.go:130] >       }
	I1202 21:09:09.027740  307731 command_runner.go:130] >     ]
	I1202 21:09:09.027743  307731 command_runner.go:130] >   }
	I1202 21:09:09.027746  307731 command_runner.go:130] > }
	I1202 21:09:09.029686  307731 cni.go:84] Creating CNI manager for ""
	I1202 21:09:09.029710  307731 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
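The NetworkReady=false condition in the crictl info dump is the signal behind this choice: no CNI config exists yet, so kindnet will be installed. Pulling that condition out directly (jq usage is illustrative):

    sudo crictl info | jq '.status.conditions[] | select(.type=="NetworkReady")'
    # Until a config lands in /etc/cni/net.d this reports "status": false with
    # reason "NetworkPluginNotReady", as in the output above.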
	I1202 21:09:09.029745  307731 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:09:09.029776  307731 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753958 NodeName:functional-753958 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:09:09.029910  307731 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-753958"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
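The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new before being diffed against the live copy. kubeadm (v1.26+) can sanity-check such a file offline; this invocation is a suggestion, not part of the logged run:

    # Validate the generated config without touching the cluster.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new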
	
	I1202 21:09:09.029985  307731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 21:09:09.036886  307731 command_runner.go:130] > kubeadm
	I1202 21:09:09.036909  307731 command_runner.go:130] > kubectl
	I1202 21:09:09.036915  307731 command_runner.go:130] > kubelet
	I1202 21:09:09.037789  307731 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 21:09:09.037851  307731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:09:09.045467  307731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 21:09:09.058043  307731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 21:09:09.070239  307731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1202 21:09:09.082241  307731 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:09:09.085795  307731 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1202 21:09:09.086355  307731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:09:09.208713  307731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:09:09.542492  307731 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958 for IP: 192.168.49.2
	I1202 21:09:09.542524  307731 certs.go:195] generating shared ca certs ...
	I1202 21:09:09.542541  307731 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:09.542698  307731 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 21:09:09.542757  307731 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 21:09:09.542770  307731 certs.go:257] generating profile certs ...
	I1202 21:09:09.542908  307731 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key
	I1202 21:09:09.542989  307731 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key.c4f6fd35
	I1202 21:09:09.543042  307731 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key
	I1202 21:09:09.543063  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 21:09:09.543077  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 21:09:09.543095  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 21:09:09.543113  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 21:09:09.543136  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 21:09:09.543152  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 21:09:09.543163  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 21:09:09.543181  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 21:09:09.543248  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 21:09:09.543300  307731 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 21:09:09.543314  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:09:09.543356  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 21:09:09.543389  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:09:09.543418  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 21:09:09.543492  307731 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 21:09:09.543552  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.543576  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.543600  307731 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem -> /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.544214  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:09:09.562449  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:09:09.579657  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:09:09.597016  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 21:09:09.615077  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 21:09:09.633715  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 21:09:09.651379  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:09:09.669401  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 21:09:09.688777  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 21:09:09.706718  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:09:09.724108  307731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 21:09:09.741960  307731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 21:09:09.754915  307731 ssh_runner.go:195] Run: openssl version
	I1202 21:09:09.760531  307731 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1202 21:09:09.760935  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 21:09:09.769169  307731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.772688  307731 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.772981  307731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.773081  307731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 21:09:09.818276  307731 command_runner.go:130] > 3ec20f2e
	I1202 21:09:09.818787  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 21:09:09.826520  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:09:09.834827  307731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.838656  307731 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.838686  307731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.838739  307731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:09:09.879212  307731 command_runner.go:130] > b5213941
	I1202 21:09:09.879657  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 21:09:09.887484  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 21:09:09.895881  307731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.899623  307731 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.899669  307731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.899717  307731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 21:09:09.940074  307731 command_runner.go:130] > 51391683
	I1202 21:09:09.940525  307731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
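The -hash / ln -fs sequence above builds an OpenSSL-style CA directory: each certificate is linked under its subject hash with a .0 suffix, which is how the TLS stack looks up CAs in /etc/ssl/certs. The same pattern for a whole directory (illustrative loop; hash collisions would need .1, .2, ... suffixes, as c_rehash handles):

    for pem in /usr/share/ca-certificates/*.pem; do
      h=$(openssl x509 -hash -noout -in "$pem")     # subject hash, e.g. b5213941
      sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"
    done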
	I1202 21:09:09.948324  307731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:09:09.951828  307731 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:09:09.951867  307731 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1202 21:09:09.951875  307731 command_runner.go:130] > Device: 259,1	Inode: 1305405     Links: 1
	I1202 21:09:09.951881  307731 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1202 21:09:09.951888  307731 command_runner.go:130] > Access: 2025-12-02 21:05:02.335914079 +0000
	I1202 21:09:09.951894  307731 command_runner.go:130] > Modify: 2025-12-02 21:00:57.486756379 +0000
	I1202 21:09:09.951898  307731 command_runner.go:130] > Change: 2025-12-02 21:00:57.486756379 +0000
	I1202 21:09:09.951903  307731 command_runner.go:130] >  Birth: 2025-12-02 21:00:57.486756379 +0000
	I1202 21:09:09.951997  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 21:09:09.992474  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:09.992586  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 21:09:10.044870  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:10.045432  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 21:09:10.090412  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:10.091042  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 21:09:10.132690  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:10.133145  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 21:09:10.173976  307731 command_runner.go:130] > Certificate will not expire
	I1202 21:09:10.174453  307731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 21:09:10.215639  307731 command_runner.go:130] > Certificate will not expire
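Each -checkend 86400 call asks whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 means it remains valid past that window. The idiom in isolation (the cert path matches those staged earlier; the wrapper is illustrative):

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "apiserver.crt OK" \
      || echo "apiserver.crt expires within 24h"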
	I1202 21:09:10.216098  307731 kubeadm.go:401] StartCluster: {Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:09:10.216220  307731 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 21:09:10.216321  307731 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:09:10.242158  307731 cri.go:89] found id: ""
	I1202 21:09:10.242234  307731 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:09:10.249118  307731 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1202 21:09:10.249140  307731 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1202 21:09:10.249151  307731 command_runner.go:130] > /var/lib/minikube/etcd:
	I1202 21:09:10.250041  307731 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 21:09:10.250060  307731 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 21:09:10.250140  307731 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 21:09:10.257350  307731 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:09:10.257790  307731 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-753958" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:10.257903  307731 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-261381/kubeconfig needs updating (will repair): [kubeconfig missing "functional-753958" cluster setting kubeconfig missing "functional-753958" context setting]
	I1202 21:09:10.258244  307731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:10.258662  307731 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:10.258838  307731 kapi.go:59] client config for functional-753958: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key", CAFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 21:09:10.259364  307731 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 21:09:10.259381  307731 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 21:09:10.259386  307731 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 21:09:10.259392  307731 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 21:09:10.259397  307731 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 21:09:10.259441  307731 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1202 21:09:10.259684  307731 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 21:09:10.267575  307731 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 21:09:10.267606  307731 kubeadm.go:602] duration metric: took 17.540251ms to restartPrimaryControlPlane
	I1202 21:09:10.267616  307731 kubeadm.go:403] duration metric: took 51.535685ms to StartCluster
	I1202 21:09:10.267631  307731 settings.go:142] acquiring lock: {Name:mk484fa83ac7553aeb154b510943680cadb4046e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:10.267694  307731 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:10.268283  307731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:09:10.268485  307731 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 21:09:10.268816  307731 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:09:10.268866  307731 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 21:09:10.268984  307731 addons.go:70] Setting storage-provisioner=true in profile "functional-753958"
	I1202 21:09:10.269003  307731 addons.go:239] Setting addon storage-provisioner=true in "functional-753958"
	I1202 21:09:10.269024  307731 host.go:66] Checking if "functional-753958" exists ...
	I1202 21:09:10.269023  307731 addons.go:70] Setting default-storageclass=true in profile "functional-753958"
	I1202 21:09:10.269176  307731 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-753958"
	I1202 21:09:10.269690  307731 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:09:10.269905  307731 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:09:10.274878  307731 out.go:179] * Verifying Kubernetes components...
	I1202 21:09:10.279673  307731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:09:10.309974  307731 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:09:10.310183  307731 kapi.go:59] client config for functional-753958: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key", CAFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 21:09:10.310507  307731 addons.go:239] Setting addon default-storageclass=true in "functional-753958"
	I1202 21:09:10.310544  307731 host.go:66] Checking if "functional-753958" exists ...
	I1202 21:09:10.311034  307731 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:09:10.322713  307731 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 21:09:10.325707  307731 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:10.325729  307731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 21:09:10.325795  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:10.357829  307731 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:10.357850  307731 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 21:09:10.357914  307731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:09:10.371695  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:10.400329  307731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:09:10.499296  307731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:09:10.516631  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:10.547824  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:11.294654  307731 node_ready.go:35] waiting up to 6m0s for node "functional-753958" to be "Ready" ...
	I1202 21:09:11.294774  307731 type.go:168] "Request Body" body=""
	I1202 21:09:11.294779  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.294839  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:11.295227  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:11.295315  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.295463  307731 retry.go:31] will retry after 210.924688ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.295467  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:11.295364  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.295550  307731 retry.go:31] will retry after 203.437895ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
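Both manifests fail identically: kubectl's client-side validation must download the OpenAPI schema from https://localhost:8441/openapi/v2, the apiserver is refusing TCP connections, so each apply exits with status 1 and minikube schedules another attempt (retry.go:31) after a growing, jittered delay. Below is a rough sketch of that retry shape, assuming a ~200ms initial interval that doubles with jitter; the actual retry.go may choose its intervals differently.

    // Hypothetical retry-with-backoff matching the irregular 210ms, 203ms,
    // 400ms, ... intervals logged above. Not minikube's implementation.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithBackoff(apply func() error, attempts int) error {
        wait := 200 * time.Millisecond // assumed initial interval
        var err error
        for i := 0; i < attempts; i++ {
            if err = apply(); err == nil {
                return nil
            }
            // Jitter keeps concurrent retries (storage-provisioner and
            // storageclass here) from hammering the apiserver in lockstep.
            sleep := wait + time.Duration(rand.Int63n(int64(wait)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            wait *= 2
        }
        return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
    }

    func main() {
        // Stand-in for the failing `kubectl apply`; always refused here.
        apply := func() error { return fmt.Errorf("connect: connection refused") }
        if err := retryWithBackoff(apply, 3); err != nil {
            fmt.Println(err)
        }
    }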
	I1202 21:09:11.500110  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:11.506791  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:11.578640  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:11.581915  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.581967  307731 retry.go:31] will retry after 400.592485ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.595609  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:11.595676  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.595708  307731 retry.go:31] will retry after 422.737023ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:11.794907  307731 type.go:168] "Request Body" body=""
	I1202 21:09:11.795054  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:11.795388  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:11.982828  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:12.018958  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:12.086246  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:12.086287  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.086307  307731 retry.go:31] will retry after 564.880189ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.117100  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:12.117143  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.117191  307731 retry.go:31] will retry after 637.534191ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.295409  307731 type.go:168] "Request Body" body=""
	I1202 21:09:12.295483  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:12.295805  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:12.652365  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:12.710471  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:12.710580  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.710622  307731 retry.go:31] will retry after 876.325619ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.755731  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:12.795162  307731 type.go:168] "Request Body" body=""
	I1202 21:09:12.795277  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:12.795599  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:12.835060  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:12.835099  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:12.835118  307731 retry.go:31] will retry after 1.227832404s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:13.295855  307731 type.go:168] "Request Body" body=""
	I1202 21:09:13.295948  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:13.296269  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:13.296338  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
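Interleaved with the addon retries, a separate loop (node_ready.go) polls GET /api/v1/nodes/functional-753958 roughly every 500ms for up to 6m0s, and logs the warning above whenever the connection is refused. A minimal client-go sketch of such a poll follows; the interval and timeout are read off the log, while the helper name is hypothetical and the code is not minikube's node_ready.go.

    // Sketch of a node-readiness poll like the one driving the GETs above.
    // Swallows transient errors (e.g. connection refused) and retries.
    package nodewait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // retry; the apiserver may still be starting
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }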
	I1202 21:09:13.587806  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:13.646676  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:13.646721  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:13.646742  307731 retry.go:31] will retry after 1.443838067s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:13.795158  307731 type.go:168] "Request Body" body=""
	I1202 21:09:13.795236  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:13.795586  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:14.064081  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:14.123819  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:14.127173  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:14.127215  307731 retry.go:31] will retry after 1.221247817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:14.295601  307731 type.go:168] "Request Body" body=""
	I1202 21:09:14.295675  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:14.295968  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:14.795792  307731 type.go:168] "Request Body" body=""
	I1202 21:09:14.795874  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:14.796179  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:15.091734  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:15.151479  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:15.151525  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:15.151546  307731 retry.go:31] will retry after 1.850953854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:15.294847  307731 type.go:168] "Request Body" body=""
	I1202 21:09:15.294941  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:15.295253  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:15.349587  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:15.413525  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:15.416721  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:15.416752  307731 retry.go:31] will retry after 1.691274377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:15.795194  307731 type.go:168] "Request Body" body=""
	I1202 21:09:15.795307  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:15.795621  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:15.795696  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:16.295456  307731 type.go:168] "Request Body" body=""
	I1202 21:09:16.295552  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:16.295874  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:16.795680  307731 type.go:168] "Request Body" body=""
	I1202 21:09:16.795755  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:16.796091  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:17.003193  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:17.061077  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:17.064289  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:17.064321  307731 retry.go:31] will retry after 2.076549374s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:17.108496  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:17.168660  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:17.168709  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:17.168731  307731 retry.go:31] will retry after 3.158627903s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:17.295738  307731 type.go:168] "Request Body" body=""
	I1202 21:09:17.295812  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:17.296081  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:17.794893  307731 type.go:168] "Request Body" body=""
	I1202 21:09:17.794974  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:17.795334  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:18.294955  307731 type.go:168] "Request Body" body=""
	I1202 21:09:18.295057  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:18.295390  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:18.295447  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:18.795090  307731 type.go:168] "Request Body" body=""
	I1202 21:09:18.795156  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:18.795510  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:19.141123  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:19.199068  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:19.202437  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:19.202469  307731 retry.go:31] will retry after 2.729492901s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:19.295833  307731 type.go:168] "Request Body" body=""
	I1202 21:09:19.295905  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:19.296241  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:19.794962  307731 type.go:168] "Request Body" body=""
	I1202 21:09:19.795035  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:19.795344  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:20.295255  307731 type.go:168] "Request Body" body=""
	I1202 21:09:20.295325  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:20.295687  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:20.295737  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:20.327882  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:20.391902  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:20.391939  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:20.391960  307731 retry.go:31] will retry after 4.367650264s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:20.795532  307731 type.go:168] "Request Body" body=""
	I1202 21:09:20.795609  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:20.795920  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:21.295837  307731 type.go:168] "Request Body" body=""
	I1202 21:09:21.295923  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:21.296260  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:21.794943  307731 type.go:168] "Request Body" body=""
	I1202 21:09:21.795018  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:21.795282  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:21.932718  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:21.990698  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:21.990736  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:21.990761  307731 retry.go:31] will retry after 5.196584204s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:22.295359  307731 type.go:168] "Request Body" body=""
	I1202 21:09:22.295443  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:22.295788  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:22.295845  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:22.795464  307731 type.go:168] "Request Body" body=""
	I1202 21:09:22.795562  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:22.795917  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:23.295669  307731 type.go:168] "Request Body" body=""
	I1202 21:09:23.295739  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:23.296001  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:23.795753  307731 type.go:168] "Request Body" body=""
	I1202 21:09:23.795825  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:23.796151  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:24.295815  307731 type.go:168] "Request Body" body=""
	I1202 21:09:24.295890  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:24.296207  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:24.296265  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:24.759924  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:24.795570  307731 type.go:168] "Request Body" body=""
	I1202 21:09:24.795642  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:24.795905  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:24.817214  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:24.821374  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:24.821411  307731 retry.go:31] will retry after 3.851570628s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
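Roughly fifteen seconds into the loop the backoff has stretched from ~200ms to several seconds, but the blocker is unchanged: the apiserver socket refuses connections, both on localhost:8441 inside the node (kubectl validation) and on 192.168.49.2:8441 (the readiness poll). Everything above reduces to the single condition probed below; host and port are taken from the log, and the probe itself is purely illustrative, not part of minikube.

    // Trivial TCP probe for the condition blocking every retry above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not accepting connections:", err) // "connect: connection refused"
            return
        }
        conn.Close()
        fmt.Println("port open; kubectl can fetch /openapi/v2 and validation will pass")
    }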
	I1202 21:09:25.294967  307731 type.go:168] "Request Body" body=""
	I1202 21:09:25.295041  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:25.295322  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:25.794947  307731 type.go:168] "Request Body" body=""
	I1202 21:09:25.795017  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:25.795343  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:26.295350  307731 type.go:168] "Request Body" body=""
	I1202 21:09:26.295431  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:26.295727  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:26.795297  307731 type.go:168] "Request Body" body=""
	I1202 21:09:26.795366  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:26.795685  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:26.795740  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:27.188447  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:27.254238  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:27.254282  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:27.254304  307731 retry.go:31] will retry after 6.785596085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:27.295437  307731 type.go:168] "Request Body" body=""
	I1202 21:09:27.295523  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:27.295865  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:27.794985  307731 type.go:168] "Request Body" body=""
	I1202 21:09:27.795057  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:27.795311  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:28.294999  307731 type.go:168] "Request Body" body=""
	I1202 21:09:28.295102  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:28.295384  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:28.674112  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:28.734788  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:28.734834  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:28.734853  307731 retry.go:31] will retry after 5.470614597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:28.794971  307731 type.go:168] "Request Body" body=""
	I1202 21:09:28.795042  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:28.795339  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:29.295607  307731 type.go:168] "Request Body" body=""
	I1202 21:09:29.295683  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:29.296024  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:29.296105  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:29.794837  307731 type.go:168] "Request Body" body=""
	I1202 21:09:29.794912  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:29.795239  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:30.295136  307731 type.go:168] "Request Body" body=""
	I1202 21:09:30.295232  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:30.295517  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:30.794890  307731 type.go:168] "Request Body" body=""
	I1202 21:09:30.794959  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:30.795267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:31.294932  307731 type.go:168] "Request Body" body=""
	I1202 21:09:31.295003  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:31.295317  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:31.794931  307731 type.go:168] "Request Body" body=""
	I1202 21:09:31.795007  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:31.795289  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:31.795338  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:32.295580  307731 type.go:168] "Request Body" body=""
	I1202 21:09:32.295653  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:32.295944  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:32.795804  307731 type.go:168] "Request Body" body=""
	I1202 21:09:32.795885  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:32.796241  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:33.294972  307731 type.go:168] "Request Body" body=""
	I1202 21:09:33.295049  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:33.295374  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:33.794828  307731 type.go:168] "Request Body" body=""
	I1202 21:09:33.794899  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:33.795152  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:34.040709  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:34.103827  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:34.103870  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:34.103890  307731 retry.go:31] will retry after 13.233422448s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
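Note: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, so while the apiserver is refusing connections even a well-formed manifest fails at the validation step. The --validate=false escape hatch named in the error only skips that schema download; the apply itself would still need a live apiserver. A sketch of invoking the same command with validation off, with paths copied from the log and the flag being the error message's own suggestion:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// sudo treats the leading VAR=value argument as an environment assignment,
	// mirroring the exact command line in the log.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// Skipping validation does nothing to make a dead apiserver reachable.
		fmt.Println("apply failed:", err)
	}
}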
	I1202 21:09:34.206146  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:34.265937  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:34.265992  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:34.266011  307731 retry.go:31] will retry after 9.178751123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:34.295270  307731 type.go:168] "Request Body" body=""
	I1202 21:09:34.295377  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:34.295751  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:34.295808  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
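Note: "connection refused" means the TCP connect was actively rejected, i.e. nothing is listening on 192.168.49.2:8441 (the apiserver container is down or restarting), which is a different failure mode from a timeout. A quick probe sketch for reproducing that distinction from the host:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Refused = port actively rejects (no listener); a timeout would instead
	// point at routing or firewall problems.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver endpoint not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 192.168.49.2:8441")
}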
	I1202 21:09:34.795590  307731 type.go:168] "Request Body" body=""
	I1202 21:09:34.795669  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:34.795998  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:35.295384  307731 type.go:168] "Request Body" body=""
	I1202 21:09:35.295449  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:35.295792  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:35.795609  307731 type.go:168] "Request Body" body=""
	I1202 21:09:35.795690  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:35.795985  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:36.294855  307731 type.go:168] "Request Body" body=""
	I1202 21:09:36.294949  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:36.295235  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:36.795205  307731 type.go:168] "Request Body" body=""
	I1202 21:09:36.795285  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:36.795563  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:36.795617  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:37.294937  307731 type.go:168] "Request Body" body=""
	I1202 21:09:37.295019  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:37.295313  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:37.794909  307731 type.go:168] "Request Body" body=""
	I1202 21:09:37.794999  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:37.795276  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:38.294875  307731 type.go:168] "Request Body" body=""
	I1202 21:09:38.294951  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:38.295216  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:38.794960  307731 type.go:168] "Request Body" body=""
	I1202 21:09:38.795035  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:38.795328  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:39.295040  307731 type.go:168] "Request Body" body=""
	I1202 21:09:39.295116  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:39.295474  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:39.295528  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:39.795755  307731 type.go:168] "Request Body" body=""
	I1202 21:09:39.795827  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:39.796097  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
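Note: the Accept header in the request dumps (application/vnd.kubernetes.protobuf,application/json) asks the apiserver for protobuf first with JSON as a fallback. In client-go that preference lives on rest.Config; a sketch, assuming the same kubeconfig path as above:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Prefer protobuf on the wire, falling back to JSON, matching the
	// Accept header in the request dumps above.
	cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	cfg.ContentType = "application/vnd.kubernetes.protobuf"

	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
	fmt.Println("client configured for protobuf responses")
}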
	I1202 21:09:40.295759  307731 type.go:168] "Request Body" body=""
	I1202 21:09:40.295831  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:40.296122  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:40.794848  307731 type.go:168] "Request Body" body=""
	I1202 21:09:40.794921  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:40.795244  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:41.294881  307731 type.go:168] "Request Body" body=""
	I1202 21:09:41.294965  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:41.295255  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:41.794953  307731 type.go:168] "Request Body" body=""
	I1202 21:09:41.795034  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:41.795359  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:41.795415  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:42.295127  307731 type.go:168] "Request Body" body=""
	I1202 21:09:42.295208  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:42.295661  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:42.795016  307731 type.go:168] "Request Body" body=""
	I1202 21:09:42.795105  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:42.795395  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:43.294952  307731 type.go:168] "Request Body" body=""
	I1202 21:09:43.295026  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:43.295345  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:43.445783  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:09:43.508150  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:43.508187  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:43.508208  307731 retry.go:31] will retry after 18.255533178s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:43.795638  307731 type.go:168] "Request Body" body=""
	I1202 21:09:43.795730  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:43.796071  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:43.796132  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:44.295329  307731 type.go:168] "Request Body" body=""
	I1202 21:09:44.295407  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:44.295673  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:44.795488  307731 type.go:168] "Request Body" body=""
	I1202 21:09:44.795564  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:44.795884  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:45.295740  307731 type.go:168] "Request Body" body=""
	I1202 21:09:45.295822  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:45.296199  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:45.794853  307731 type.go:168] "Request Body" body=""
	I1202 21:09:45.794922  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:45.795177  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:46.295009  307731 type.go:168] "Request Body" body=""
	I1202 21:09:46.295107  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:46.295418  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:46.295474  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:46.795131  307731 type.go:168] "Request Body" body=""
	I1202 21:09:46.795214  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:46.795532  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:47.295250  307731 type.go:168] "Request Body" body=""
	I1202 21:09:47.295339  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:47.295611  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:47.337905  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:09:47.398412  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:09:47.398459  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:47.398478  307731 retry.go:31] will retry after 28.802230035s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:09:47.794958  307731 type.go:168] "Request Body" body=""
	I1202 21:09:47.795033  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:47.795332  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:48.294941  307731 type.go:168] "Request Body" body=""
	I1202 21:09:48.295012  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:48.295290  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:48.794980  307731 type.go:168] "Request Body" body=""
	I1202 21:09:48.795053  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:48.795304  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:48.795347  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:49.294944  307731 type.go:168] "Request Body" body=""
	I1202 21:09:49.295017  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:49.295302  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:49.794949  307731 type.go:168] "Request Body" body=""
	I1202 21:09:49.795021  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:49.795348  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:50.295306  307731 type.go:168] "Request Body" body=""
	I1202 21:09:50.295374  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:50.295672  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:50.795457  307731 type.go:168] "Request Body" body=""
	I1202 21:09:50.795527  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:50.795850  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:50.795908  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:51.295904  307731 type.go:168] "Request Body" body=""
	I1202 21:09:51.295977  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:51.296267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:51.794890  307731 type.go:168] "Request Body" body=""
	I1202 21:09:51.794969  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:51.795305  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:52.294941  307731 type.go:168] "Request Body" body=""
	I1202 21:09:52.295012  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:52.295341  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:52.794926  307731 type.go:168] "Request Body" body=""
	I1202 21:09:52.795024  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:52.795310  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:53.295540  307731 type.go:168] "Request Body" body=""
	I1202 21:09:53.295618  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:53.295885  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:53.295930  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:53.795650  307731 type.go:168] "Request Body" body=""
	I1202 21:09:53.795732  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:53.796075  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:54.295722  307731 type.go:168] "Request Body" body=""
	I1202 21:09:54.295802  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:54.296147  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:54.795430  307731 type.go:168] "Request Body" body=""
	I1202 21:09:54.795496  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:54.795754  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:55.295532  307731 type.go:168] "Request Body" body=""
	I1202 21:09:55.295606  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:55.295927  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:55.295984  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:55.795762  307731 type.go:168] "Request Body" body=""
	I1202 21:09:55.795835  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:55.796153  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:56.294887  307731 type.go:168] "Request Body" body=""
	I1202 21:09:56.294998  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:56.295324  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:56.794933  307731 type.go:168] "Request Body" body=""
	I1202 21:09:56.795014  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:56.795395  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:57.295124  307731 type.go:168] "Request Body" body=""
	I1202 21:09:57.295200  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:57.295537  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:57.795227  307731 type.go:168] "Request Body" body=""
	I1202 21:09:57.795291  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:57.795605  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:09:57.795689  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:09:58.295413  307731 type.go:168] "Request Body" body=""
	I1202 21:09:58.295489  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:58.295818  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:58.795617  307731 type.go:168] "Request Body" body=""
	I1202 21:09:58.795690  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:58.796019  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:59.295296  307731 type.go:168] "Request Body" body=""
	I1202 21:09:59.295368  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:59.295623  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:09:59.794911  307731 type.go:168] "Request Body" body=""
	I1202 21:09:59.794983  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:09:59.795300  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:00.295306  307731 type.go:168] "Request Body" body=""
	I1202 21:10:00.295398  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:00.295706  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:00.295756  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:00.795732  307731 type.go:168] "Request Body" body=""
	I1202 21:10:00.795832  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:00.796237  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:01.295009  307731 type.go:168] "Request Body" body=""
	I1202 21:10:01.295081  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:01.295430  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:01.763971  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:10:01.794859  307731 type.go:168] "Request Body" body=""
	I1202 21:10:01.794929  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:01.795196  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:01.835916  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:01.839908  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:10:01.839940  307731 retry.go:31] will retry after 30.677466671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:10:02.295717  307731 type.go:168] "Request Body" body=""
	I1202 21:10:02.295826  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:02.296209  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:02.296289  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:02.794978  307731 type.go:168] "Request Body" body=""
	I1202 21:10:02.795054  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:02.795406  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:03.295097  307731 type.go:168] "Request Body" body=""
	I1202 21:10:03.295176  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:03.295453  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:03.794940  307731 type.go:168] "Request Body" body=""
	I1202 21:10:03.795026  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:03.795356  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:04.295114  307731 type.go:168] "Request Body" body=""
	I1202 21:10:04.295196  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:04.295536  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:04.795775  307731 type.go:168] "Request Body" body=""
	I1202 21:10:04.795845  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:04.796122  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:04.796171  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:05.294855  307731 type.go:168] "Request Body" body=""
	I1202 21:10:05.294934  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:05.295264  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:05.795079  307731 type.go:168] "Request Body" body=""
	I1202 21:10:05.795173  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:05.795544  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:06.295514  307731 type.go:168] "Request Body" body=""
	I1202 21:10:06.295601  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:06.295881  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:06.795664  307731 type.go:168] "Request Body" body=""
	I1202 21:10:06.795741  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:06.796081  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:07.294800  307731 type.go:168] "Request Body" body=""
	I1202 21:10:07.294876  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:07.295208  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:07.295261  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:07.795446  307731 type.go:168] "Request Body" body=""
	I1202 21:10:07.795518  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:07.795780  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:08.295543  307731 type.go:168] "Request Body" body=""
	I1202 21:10:08.295618  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:08.295937  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:08.795803  307731 type.go:168] "Request Body" body=""
	I1202 21:10:08.795884  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:08.796321  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:09.294866  307731 type.go:168] "Request Body" body=""
	I1202 21:10:09.294942  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:09.295253  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:09.295304  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:09.794945  307731 type.go:168] "Request Body" body=""
	I1202 21:10:09.795028  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:09.795434  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:10.295294  307731 type.go:168] "Request Body" body=""
	I1202 21:10:10.295369  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:10.295705  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:10.795493  307731 type.go:168] "Request Body" body=""
	I1202 21:10:10.795577  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:10.795953  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:11.295781  307731 type.go:168] "Request Body" body=""
	I1202 21:10:11.295870  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:11.296220  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:11.296268  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:11.794950  307731 type.go:168] "Request Body" body=""
	I1202 21:10:11.795027  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:11.795368  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:12.295047  307731 type.go:168] "Request Body" body=""
	I1202 21:10:12.295128  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:12.295374  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:12.794921  307731 type.go:168] "Request Body" body=""
	I1202 21:10:12.794998  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:12.795385  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:13.294954  307731 type.go:168] "Request Body" body=""
	I1202 21:10:13.295031  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:13.295358  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:13.794876  307731 type.go:168] "Request Body" body=""
	I1202 21:10:13.794943  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:13.795197  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:13.795238  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:14.294945  307731 type.go:168] "Request Body" body=""
	I1202 21:10:14.295037  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:14.295425  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:14.795147  307731 type.go:168] "Request Body" body=""
	I1202 21:10:14.795224  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:14.795562  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:15.295257  307731 type.go:168] "Request Body" body=""
	I1202 21:10:15.295338  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:15.295612  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:15.794916  307731 type.go:168] "Request Body" body=""
	I1202 21:10:15.794993  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:15.795325  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:15.795380  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:16.200937  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:10:16.256562  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:16.259927  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:10:16.259959  307731 retry.go:31] will retry after 18.923209073s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:10:16.295107  307731 type.go:168] "Request Body" body=""
	I1202 21:10:16.295189  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:16.295558  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:16.794811  307731 type.go:168] "Request Body" body=""
	I1202 21:10:16.794881  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:16.795143  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:17.294834  307731 type.go:168] "Request Body" body=""
	I1202 21:10:17.294938  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:17.295260  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:17.794952  307731 type.go:168] "Request Body" body=""
	I1202 21:10:17.795031  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:17.795318  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:18.294867  307731 type.go:168] "Request Body" body=""
	I1202 21:10:18.294954  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:18.295206  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:18.295258  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:18.794947  307731 type.go:168] "Request Body" body=""
	I1202 21:10:18.795023  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:18.795317  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:19.294956  307731 type.go:168] "Request Body" body=""
	I1202 21:10:19.295038  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:19.295370  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:19.794876  307731 type.go:168] "Request Body" body=""
	I1202 21:10:19.794970  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:19.795282  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:20.295271  307731 type.go:168] "Request Body" body=""
	I1202 21:10:20.295345  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:20.295682  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:20.295746  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:20.795510  307731 type.go:168] "Request Body" body=""
	I1202 21:10:20.795586  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:20.795908  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:21.295384  307731 type.go:168] "Request Body" body=""
	I1202 21:10:21.295457  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:21.295714  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:21.795556  307731 type.go:168] "Request Body" body=""
	I1202 21:10:21.795634  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:21.795949  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:22.295726  307731 type.go:168] "Request Body" body=""
	I1202 21:10:22.295802  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:22.296133  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:22.296198  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:22.795455  307731 type.go:168] "Request Body" body=""
	I1202 21:10:22.795537  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:22.795801  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:23.295603  307731 type.go:168] "Request Body" body=""
	I1202 21:10:23.295679  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:23.296049  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:23.795725  307731 type.go:168] "Request Body" body=""
	I1202 21:10:23.795807  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:23.796143  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:24.294826  307731 type.go:168] "Request Body" body=""
	I1202 21:10:24.294902  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:24.295188  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:24.795853  307731 type.go:168] "Request Body" body=""
	I1202 21:10:24.795928  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:24.796234  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:24.796284  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:25.294847  307731 type.go:168] "Request Body" body=""
	I1202 21:10:25.294948  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:25.295306  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:25.794855  307731 type.go:168] "Request Body" body=""
	I1202 21:10:25.794922  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:25.795171  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:26.294939  307731 type.go:168] "Request Body" body=""
	I1202 21:10:26.295012  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:26.295321  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:26.795034  307731 type.go:168] "Request Body" body=""
	I1202 21:10:26.795116  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:26.795438  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:27.294916  307731 type.go:168] "Request Body" body=""
	I1202 21:10:27.294995  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:27.295345  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:27.295395  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:27.794938  307731 type.go:168] "Request Body" body=""
	I1202 21:10:27.795010  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:27.795348  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:28.294934  307731 type.go:168] "Request Body" body=""
	I1202 21:10:28.295009  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:28.295346  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:28.794910  307731 type.go:168] "Request Body" body=""
	I1202 21:10:28.794984  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:28.795299  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:29.294923  307731 type.go:168] "Request Body" body=""
	I1202 21:10:29.295009  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:29.295351  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:29.295418  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:29.795094  307731 type.go:168] "Request Body" body=""
	I1202 21:10:29.795169  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:29.795504  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:30.295472  307731 type.go:168] "Request Body" body=""
	I1202 21:10:30.295550  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:30.295841  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:30.795670  307731 type.go:168] "Request Body" body=""
	I1202 21:10:30.795750  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:30.796084  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:31.294839  307731 type.go:168] "Request Body" body=""
	I1202 21:10:31.294919  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:31.295203  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:31.794808  307731 type.go:168] "Request Body" body=""
	I1202 21:10:31.794881  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:31.795146  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:31.795189  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:32.294872  307731 type.go:168] "Request Body" body=""
	I1202 21:10:32.294951  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:32.295277  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:32.517612  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 21:10:32.588466  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:32.591823  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:32.591933  307731 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
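The storageclass apply fails for the same root cause as the storage-provisioner one: kubectl validates a manifest by downloading the OpenAPI schema from the apiserver, so while the apiserver is down every apply fails at the validation step regardless of the manifest's contents, and the --validate=false suggested in the error text would only skip that schema check, not make the apply succeed. A quick probe of the same endpoint confirms the apiserver, not the YAML, is the problem; the endpoint and port are taken from the log, the rest is a sketch:

    package main

    // Probe the apiserver the same way the failing openapi download does.
    // A connection-refused here confirms kubectl validation cannot work.

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // skip TLS verification for this standalone probe (sketch only);
                // kubectl itself verifies against the kubeconfig CA
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://localhost:8441/openapi/v2")
        if err != nil {
            fmt.Println("apiserver unreachable:", err) // matches the log's connection refused
            return
        }
        defer resp.Body.Close()
        fmt.Println("apiserver answered:", resp.Status)
    }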
	I1202 21:10:32.795459  307731 type.go:168] "Request Body" body=""
	I1202 21:10:32.795532  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:32.795852  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:33.295131  307731 type.go:168] "Request Body" body=""
	I1202 21:10:33.295202  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:33.295466  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:33.794891  307731 type.go:168] "Request Body" body=""
	I1202 21:10:33.794962  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:33.795259  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:33.795314  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:34.294911  307731 type.go:168] "Request Body" body=""
	I1202 21:10:34.294983  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:34.295307  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:34.795003  307731 type.go:168] "Request Body" body=""
	I1202 21:10:34.795074  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:34.795374  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:35.183965  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:10:35.239016  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:10:35.242188  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:10:35.242221  307731 retry.go:31] will retry after 25.961571555s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 21:10:35.295555  307731 type.go:168] "Request Body" body=""
	I1202 21:10:35.295639  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:35.295975  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:35.795775  307731 type.go:168] "Request Body" body=""
	I1202 21:10:35.795845  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:35.796134  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:35.796175  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:36.295019  307731 type.go:168] "Request Body" body=""
	I1202 21:10:36.295091  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:36.295347  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:36.794927  307731 type.go:168] "Request Body" body=""
	I1202 21:10:36.795019  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:36.795359  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:37.295060  307731 type.go:168] "Request Body" body=""
	I1202 21:10:37.295132  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:37.295466  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:37.795743  307731 type.go:168] "Request Body" body=""
	I1202 21:10:37.795817  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:37.796071  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:38.295875  307731 type.go:168] "Request Body" body=""
	I1202 21:10:38.295951  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:38.296303  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:38.296363  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:38.794921  307731 type.go:168] "Request Body" body=""
	I1202 21:10:38.794997  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:38.795362  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:39.295633  307731 type.go:168] "Request Body" body=""
	I1202 21:10:39.295705  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:39.295992  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:39.795820  307731 type.go:168] "Request Body" body=""
	I1202 21:10:39.795894  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:39.796194  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:40.295848  307731 type.go:168] "Request Body" body=""
	I1202 21:10:40.295936  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:40.296337  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:40.296429  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:40.794824  307731 type.go:168] "Request Body" body=""
	I1202 21:10:40.794917  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:40.795169  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:41.294920  307731 type.go:168] "Request Body" body=""
	I1202 21:10:41.294994  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:41.295356  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:41.794929  307731 type.go:168] "Request Body" body=""
	I1202 21:10:41.795010  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:41.795377  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:42.295089  307731 type.go:168] "Request Body" body=""
	I1202 21:10:42.295192  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:42.295500  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:42.795194  307731 type.go:168] "Request Body" body=""
	I1202 21:10:42.795316  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:42.795641  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:42.795694  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:43.295520  307731 type.go:168] "Request Body" body=""
	I1202 21:10:43.295594  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:43.295933  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:43.795644  307731 type.go:168] "Request Body" body=""
	I1202 21:10:43.795714  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:43.795981  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:44.295768  307731 type.go:168] "Request Body" body=""
	I1202 21:10:44.295846  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:44.296173  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:44.794885  307731 type.go:168] "Request Body" body=""
	I1202 21:10:44.794966  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:44.795306  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:45.294922  307731 type.go:168] "Request Body" body=""
	I1202 21:10:45.295001  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:45.295295  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:45.295340  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:45.794909  307731 type.go:168] "Request Body" body=""
	I1202 21:10:45.794981  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:45.795320  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:46.295077  307731 type.go:168] "Request Body" body=""
	I1202 21:10:46.295153  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:46.295482  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:46.795187  307731 type.go:168] "Request Body" body=""
	I1202 21:10:46.795257  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:46.795513  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:47.294913  307731 type.go:168] "Request Body" body=""
	I1202 21:10:47.294985  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:47.295277  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:47.794962  307731 type.go:168] "Request Body" body=""
	I1202 21:10:47.795042  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:47.795380  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:47.795437  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:48.295512  307731 type.go:168] "Request Body" body=""
	I1202 21:10:48.295579  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:48.295842  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:48.795623  307731 type.go:168] "Request Body" body=""
	I1202 21:10:48.795698  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:48.796054  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:49.295731  307731 type.go:168] "Request Body" body=""
	I1202 21:10:49.295806  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:49.296154  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:49.795443  307731 type.go:168] "Request Body" body=""
	I1202 21:10:49.795545  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:49.795873  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:49.795941  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:50.295652  307731 type.go:168] "Request Body" body=""
	I1202 21:10:50.295726  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:50.296078  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:50.795731  307731 type.go:168] "Request Body" body=""
	I1202 21:10:50.795808  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:50.796159  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:51.295466  307731 type.go:168] "Request Body" body=""
	I1202 21:10:51.295534  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:51.295787  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:51.795602  307731 type.go:168] "Request Body" body=""
	I1202 21:10:51.795679  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:51.796007  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:51.796073  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:52.295850  307731 type.go:168] "Request Body" body=""
	I1202 21:10:52.295932  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:52.296267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:52.794970  307731 type.go:168] "Request Body" body=""
	I1202 21:10:52.795045  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:52.795299  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:53.294905  307731 type.go:168] "Request Body" body=""
	I1202 21:10:53.294979  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:53.295320  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:53.794897  307731 type.go:168] "Request Body" body=""
	I1202 21:10:53.794971  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:53.795329  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:54.295102  307731 type.go:168] "Request Body" body=""
	I1202 21:10:54.295168  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:54.295441  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:54.295539  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:54.794904  307731 type.go:168] "Request Body" body=""
	I1202 21:10:54.794979  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:54.795343  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:55.295052  307731 type.go:168] "Request Body" body=""
	I1202 21:10:55.295132  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:55.295482  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:55.795785  307731 type.go:168] "Request Body" body=""
	I1202 21:10:55.795851  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:55.796131  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:56.294983  307731 type.go:168] "Request Body" body=""
	I1202 21:10:56.295063  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:56.295386  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:56.795123  307731 type.go:168] "Request Body" body=""
	I1202 21:10:56.795230  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:56.795573  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:56.795626  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:57.294814  307731 type.go:168] "Request Body" body=""
	I1202 21:10:57.294906  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:57.295200  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:57.794903  307731 type.go:168] "Request Body" body=""
	I1202 21:10:57.794977  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:57.795292  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:58.294897  307731 type.go:168] "Request Body" body=""
	I1202 21:10:58.294972  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:58.295313  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:58.795026  307731 type.go:168] "Request Body" body=""
	I1202 21:10:58.795092  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:58.795360  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:10:59.294927  307731 type.go:168] "Request Body" body=""
	I1202 21:10:59.295017  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:59.295353  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:10:59.295412  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:10:59.795027  307731 type.go:168] "Request Body" body=""
	I1202 21:10:59.795102  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:10:59.795393  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:00.298236  307731 type.go:168] "Request Body" body=""
	I1202 21:11:00.298341  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:00.298735  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:00.795120  307731 type.go:168] "Request Body" body=""
	I1202 21:11:00.795194  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:00.795534  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:01.204061  307731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 21:11:01.267039  307731 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:11:01.267090  307731 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 21:11:01.267174  307731 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 21:11:01.270170  307731 out.go:179] * Enabled addons: 
	I1202 21:11:01.273921  307731 addons.go:530] duration metric: took 1m51.005043213s for enable addons: enabled=[]
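After 1m51s of failing callbacks, minikube gives up on addons and reports an empty set, while the node-readiness poll below continues on its 500ms cadence. The loop in the surrounding log lines reduces to the following shape; the URL and interval come from the log, but the simplified success check and attempt cap are assumptions of this sketch:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // pollNodeReady mirrors the GET loop in the surrounding log lines:
    // probe the node object on a fixed interval until the apiserver answers.
    func pollNodeReady(url string, interval time.Duration, attempts int) bool {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for i := 0; i < attempts; i++ {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                fmt.Println("got response:", resp.Status)
                return true
            }
            fmt.Println("will retry:", err) // connection refused while the apiserver is down
            time.Sleep(interval)
        }
        return false
    }

    func main() {
        pollNodeReady("https://192.168.49.2:8441/api/v1/nodes/functional-753958",
            500*time.Millisecond, 20)
    }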
	I1202 21:11:01.295263  307731 type.go:168] "Request Body" body=""
	I1202 21:11:01.295359  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:01.295653  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:01.295706  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:01.795541  307731 type.go:168] "Request Body" body=""
	I1202 21:11:01.795613  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:01.795971  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:02.295791  307731 type.go:168] "Request Body" body=""
	I1202 21:11:02.295861  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:02.296199  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:02.794952  307731 type.go:168] "Request Body" body=""
	I1202 21:11:02.795033  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:02.795359  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:03.294886  307731 type.go:168] "Request Body" body=""
	I1202 21:11:03.294966  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:03.295285  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:03.795031  307731 type.go:168] "Request Body" body=""
	I1202 21:11:03.795108  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:03.795398  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:03.795445  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:04.295135  307731 type.go:168] "Request Body" body=""
	I1202 21:11:04.295207  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:04.295489  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:04.797772  307731 type.go:168] "Request Body" body=""
	I1202 21:11:04.797855  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:04.798166  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:05.294871  307731 type.go:168] "Request Body" body=""
	I1202 21:11:05.294952  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:05.295295  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:05.795024  307731 type.go:168] "Request Body" body=""
	I1202 21:11:05.795114  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:05.795840  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:05.795891  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:06.295375  307731 type.go:168] "Request Body" body=""
	I1202 21:11:06.295448  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:06.295699  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:06.795562  307731 type.go:168] "Request Body" body=""
	I1202 21:11:06.795637  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:06.795987  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:07.295777  307731 type.go:168] "Request Body" body=""
	I1202 21:11:07.295853  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:07.296159  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:07.795390  307731 type.go:168] "Request Body" body=""
	I1202 21:11:07.795462  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:07.795723  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:08.295538  307731 type.go:168] "Request Body" body=""
	I1202 21:11:08.295622  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:08.295961  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:08.296019  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:08.795765  307731 type.go:168] "Request Body" body=""
	I1202 21:11:08.795839  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:08.796212  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:09.295353  307731 type.go:168] "Request Body" body=""
	I1202 21:11:09.295424  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:09.295732  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:09.795220  307731 type.go:168] "Request Body" body=""
	I1202 21:11:09.795301  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:09.795760  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:10.295747  307731 type.go:168] "Request Body" body=""
	I1202 21:11:10.295830  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:10.296197  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:10.296275  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:10.794847  307731 type.go:168] "Request Body" body=""
	I1202 21:11:10.794927  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:10.795204  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:11.295063  307731 type.go:168] "Request Body" body=""
	I1202 21:11:11.295142  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:11.295478  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:11.795187  307731 type.go:168] "Request Body" body=""
	I1202 21:11:11.795260  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:11.795582  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:12.294899  307731 type.go:168] "Request Body" body=""
	I1202 21:11:12.294983  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:12.295257  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:12.794909  307731 type.go:168] "Request Body" body=""
	I1202 21:11:12.794985  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:12.795329  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:12.795384  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:13.295067  307731 type.go:168] "Request Body" body=""
	I1202 21:11:13.295150  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:13.295484  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:13.794911  307731 type.go:168] "Request Body" body=""
	I1202 21:11:13.794980  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:13.795267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:14.294851  307731 type.go:168] "Request Body" body=""
	I1202 21:11:14.294929  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:14.295263  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:14.794845  307731 type.go:168] "Request Body" body=""
	I1202 21:11:14.794920  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:14.795267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:15.294957  307731 type.go:168] "Request Body" body=""
	I1202 21:11:15.295024  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:15.295277  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:15.295317  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:15.794930  307731 type.go:168] "Request Body" body=""
	I1202 21:11:15.795005  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:15.795367  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:16.294926  307731 type.go:168] "Request Body" body=""
	I1202 21:11:16.295007  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:16.295351  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:16.794824  307731 type.go:168] "Request Body" body=""
	I1202 21:11:16.794897  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:16.795171  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:17.294881  307731 type.go:168] "Request Body" body=""
	I1202 21:11:17.294952  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:17.295258  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:17.794911  307731 type.go:168] "Request Body" body=""
	I1202 21:11:17.795029  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:17.795337  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:17.795384  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:18.294837  307731 type.go:168] "Request Body" body=""
	I1202 21:11:18.294907  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:18.295270  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:18.794916  307731 type.go:168] "Request Body" body=""
	I1202 21:11:18.794993  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:18.795332  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:19.295034  307731 type.go:168] "Request Body" body=""
	I1202 21:11:19.295134  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:19.295446  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:19.795123  307731 type.go:168] "Request Body" body=""
	I1202 21:11:19.795197  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:19.795502  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:19.795550  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:20.295498  307731 type.go:168] "Request Body" body=""
	I1202 21:11:20.295582  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:20.295890  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:20.795670  307731 type.go:168] "Request Body" body=""
	I1202 21:11:20.795745  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:20.796070  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:21.294797  307731 type.go:168] "Request Body" body=""
	I1202 21:11:21.294862  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:21.295106  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:21.795856  307731 type.go:168] "Request Body" body=""
	I1202 21:11:21.795927  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:21.796206  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:21.796258  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:22.294911  307731 type.go:168] "Request Body" body=""
	I1202 21:11:22.295002  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:22.295336  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:22.795444  307731 type.go:168] "Request Body" body=""
	I1202 21:11:22.795511  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:22.795821  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:23.295637  307731 type.go:168] "Request Body" body=""
	I1202 21:11:23.295716  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:23.296030  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:23.795816  307731 type.go:168] "Request Body" body=""
	I1202 21:11:23.795911  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:23.796220  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:24.294908  307731 type.go:168] "Request Body" body=""
	I1202 21:11:24.295038  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:24.295400  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:24.295449  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:24.794928  307731 type.go:168] "Request Body" body=""
	I1202 21:11:24.795056  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:24.795347  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:25.294949  307731 type.go:168] "Request Body" body=""
	I1202 21:11:25.295023  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:25.295327  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:25.795650  307731 type.go:168] "Request Body" body=""
	I1202 21:11:25.795726  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:25.795991  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:26.294874  307731 type.go:168] "Request Body" body=""
	I1202 21:11:26.294952  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:26.295297  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:26.794989  307731 type.go:168] "Request Body" body=""
	I1202 21:11:26.795064  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:26.795394  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:26.795449  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:27.295101  307731 type.go:168] "Request Body" body=""
	I1202 21:11:27.295170  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:27.295451  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:27.794923  307731 type.go:168] "Request Body" body=""
	I1202 21:11:27.794995  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:27.795354  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:28.294927  307731 type.go:168] "Request Body" body=""
	I1202 21:11:28.295007  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:28.295301  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:28.795573  307731 type.go:168] "Request Body" body=""
	I1202 21:11:28.795646  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:28.795898  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:28.795938  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:29.295736  307731 type.go:168] "Request Body" body=""
	I1202 21:11:29.295816  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:29.296135  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:29.794877  307731 type.go:168] "Request Body" body=""
	I1202 21:11:29.794966  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:29.795325  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:30.295097  307731 type.go:168] "Request Body" body=""
	I1202 21:11:30.295169  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:30.295440  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:30.794919  307731 type.go:168] "Request Body" body=""
	I1202 21:11:30.794997  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:30.795313  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:31.294936  307731 type.go:168] "Request Body" body=""
	I1202 21:11:31.295019  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:31.295348  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:31.295398  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:31.794864  307731 type.go:168] "Request Body" body=""
	I1202 21:11:31.794939  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:31.795188  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:32.294898  307731 type.go:168] "Request Body" body=""
	I1202 21:11:32.294975  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:32.295306  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:32.794926  307731 type.go:168] "Request Body" body=""
	I1202 21:11:32.795000  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:32.795339  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:33.295036  307731 type.go:168] "Request Body" body=""
	I1202 21:11:33.295108  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:33.295363  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:33.794937  307731 type.go:168] "Request Body" body=""
	I1202 21:11:33.795021  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:33.795373  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:33.795429  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:34.294913  307731 type.go:168] "Request Body" body=""
	I1202 21:11:34.294989  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:34.295322  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:34.795011  307731 type.go:168] "Request Body" body=""
	I1202 21:11:34.795087  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:34.795342  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:35.294937  307731 type.go:168] "Request Body" body=""
	I1202 21:11:35.295015  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:35.295337  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:35.795066  307731 type.go:168] "Request Body" body=""
	I1202 21:11:35.795146  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:35.795473  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:35.795529  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:36.295315  307731 type.go:168] "Request Body" body=""
	I1202 21:11:36.295394  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:36.295654  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:36.795469  307731 type.go:168] "Request Body" body=""
	I1202 21:11:36.795546  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:36.795881  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:37.295695  307731 type.go:168] "Request Body" body=""
	I1202 21:11:37.295777  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:37.296183  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:37.795356  307731 type.go:168] "Request Body" body=""
	I1202 21:11:37.795431  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:37.795698  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:37.795750  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:38.295449  307731 type.go:168] "Request Body" body=""
	I1202 21:11:38.295517  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:38.295837  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:38.795650  307731 type.go:168] "Request Body" body=""
	I1202 21:11:38.795731  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:38.796075  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:39.295366  307731 type.go:168] "Request Body" body=""
	I1202 21:11:39.295436  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:39.295758  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:39.795586  307731 type.go:168] "Request Body" body=""
	I1202 21:11:39.795668  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:39.795998  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:39.796055  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:40.294852  307731 type.go:168] "Request Body" body=""
	I1202 21:11:40.294933  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:40.295284  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:40.794857  307731 type.go:168] "Request Body" body=""
	I1202 21:11:40.794934  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:40.795237  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:41.295084  307731 type.go:168] "Request Body" body=""
	I1202 21:11:41.295163  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:41.295481  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:41.794927  307731 type.go:168] "Request Body" body=""
	I1202 21:11:41.795005  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:41.795339  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:42.295575  307731 type.go:168] "Request Body" body=""
	I1202 21:11:42.295656  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:42.295978  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:42.296030  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:42.795792  307731 type.go:168] "Request Body" body=""
	I1202 21:11:42.795869  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:42.796202  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:43.295844  307731 type.go:168] "Request Body" body=""
	I1202 21:11:43.295922  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:43.296257  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:43.795435  307731 type.go:168] "Request Body" body=""
	I1202 21:11:43.795509  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:43.795804  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:44.295603  307731 type.go:168] "Request Body" body=""
	I1202 21:11:44.295700  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:44.296029  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:44.296112  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:44.794813  307731 type.go:168] "Request Body" body=""
	I1202 21:11:44.794887  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:44.795255  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:45.294944  307731 type.go:168] "Request Body" body=""
	I1202 21:11:45.295025  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:45.295309  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:45.794932  307731 type.go:168] "Request Body" body=""
	I1202 21:11:45.795013  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:45.795341  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:46.295180  307731 type.go:168] "Request Body" body=""
	I1202 21:11:46.295255  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:46.295594  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:46.795733  307731 type.go:168] "Request Body" body=""
	I1202 21:11:46.795806  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:46.796075  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:46.796126  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:47.294799  307731 type.go:168] "Request Body" body=""
	I1202 21:11:47.294879  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:47.295242  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:47.794839  307731 type.go:168] "Request Body" body=""
	I1202 21:11:47.794920  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:47.795217  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:48.294853  307731 type.go:168] "Request Body" body=""
	I1202 21:11:48.294919  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:48.295217  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:48.794947  307731 type.go:168] "Request Body" body=""
	I1202 21:11:48.795020  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:48.795320  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:49.294951  307731 type.go:168] "Request Body" body=""
	I1202 21:11:49.295028  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:49.295348  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:49.295407  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:49.795675  307731 type.go:168] "Request Body" body=""
	I1202 21:11:49.795752  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:49.796093  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:50.295777  307731 type.go:168] "Request Body" body=""
	I1202 21:11:50.295858  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:50.296181  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:50.794944  307731 type.go:168] "Request Body" body=""
	I1202 21:11:50.795022  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:50.795327  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:51.294892  307731 type.go:168] "Request Body" body=""
	I1202 21:11:51.294961  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:51.295275  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:51.794949  307731 type.go:168] "Request Body" body=""
	I1202 21:11:51.795028  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:51.795369  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:51.795425  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:52.295105  307731 type.go:168] "Request Body" body=""
	I1202 21:11:52.295183  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:52.295500  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:52.795678  307731 type.go:168] "Request Body" body=""
	I1202 21:11:52.795744  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:52.796004  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:53.295812  307731 type.go:168] "Request Body" body=""
	I1202 21:11:53.295892  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:53.296208  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:53.795862  307731 type.go:168] "Request Body" body=""
	I1202 21:11:53.795942  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:53.796296  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:53.796344  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:54.294832  307731 type.go:168] "Request Body" body=""
	I1202 21:11:54.294896  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:54.295145  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:54.794887  307731 type.go:168] "Request Body" body=""
	I1202 21:11:54.794967  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:54.795291  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:55.294921  307731 type.go:168] "Request Body" body=""
	I1202 21:11:55.294995  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:55.295281  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:55.795485  307731 type.go:168] "Request Body" body=""
	I1202 21:11:55.795558  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:55.795809  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:56.295721  307731 type.go:168] "Request Body" body=""
	I1202 21:11:56.295797  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:56.296098  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:56.296148  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:56.794838  307731 type.go:168] "Request Body" body=""
	I1202 21:11:56.794917  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:56.795263  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:57.295605  307731 type.go:168] "Request Body" body=""
	I1202 21:11:57.295673  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:57.295938  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:57.795732  307731 type.go:168] "Request Body" body=""
	I1202 21:11:57.795802  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:57.796121  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:58.294836  307731 type.go:168] "Request Body" body=""
	I1202 21:11:58.294913  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:58.295263  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:58.794825  307731 type.go:168] "Request Body" body=""
	I1202 21:11:58.794896  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:58.795143  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:11:58.795190  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:11:59.294877  307731 type.go:168] "Request Body" body=""
	I1202 21:11:59.294951  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:59.295267  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:11:59.794990  307731 type.go:168] "Request Body" body=""
	I1202 21:11:59.795067  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:11:59.795410  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:00.308704  307731 type.go:168] "Request Body" body=""
	I1202 21:12:00.308789  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:00.309104  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:12:00.794873  307731 type.go:168] "Request Body" body=""
	I1202 21:12:00.794956  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:00.795278  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:12:00.795332  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:12:01.294968  307731 type.go:168] "Request Body" body=""
	I1202 21:12:01.295063  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:12:01.295473  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-753958 is re-issued every ~500 ms from 21:12:01 through 21:12:59 (about 115 further attempts; the identical "Request Body"/"Request"/"Response" triplets are elided). Every attempt fails with "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go:55 repeats the same will-retry warning roughly every 2 s throughout ...]
	I1202 21:13:00.295830  307731 type.go:168] "Request Body" body=""
	I1202 21:13:00.295907  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:00.296237  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:00.296286  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:00.794930  307731 type.go:168] "Request Body" body=""
	I1202 21:13:00.795003  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:00.795320  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:01.294866  307731 type.go:168] "Request Body" body=""
	I1202 21:13:01.294943  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:01.295254  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:01.794964  307731 type.go:168] "Request Body" body=""
	I1202 21:13:01.795065  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:01.795411  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:02.294930  307731 type.go:168] "Request Body" body=""
	I1202 21:13:02.295013  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:02.295348  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:02.795414  307731 type.go:168] "Request Body" body=""
	I1202 21:13:02.795493  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:02.795754  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:02.795808  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:03.295626  307731 type.go:168] "Request Body" body=""
	I1202 21:13:03.295706  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:03.296056  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:03.795867  307731 type.go:168] "Request Body" body=""
	I1202 21:13:03.795947  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:03.796294  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:04.294876  307731 type.go:168] "Request Body" body=""
	I1202 21:13:04.294954  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:04.295212  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:04.794889  307731 type.go:168] "Request Body" body=""
	I1202 21:13:04.794976  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:04.795297  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:05.295036  307731 type.go:168] "Request Body" body=""
	I1202 21:13:05.295111  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:05.295416  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:05.295461  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:05.795108  307731 type.go:168] "Request Body" body=""
	I1202 21:13:05.795173  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:05.795466  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:06.295448  307731 type.go:168] "Request Body" body=""
	I1202 21:13:06.295528  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:06.296185  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:06.794905  307731 type.go:168] "Request Body" body=""
	I1202 21:13:06.794985  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:06.795346  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:07.295651  307731 type.go:168] "Request Body" body=""
	I1202 21:13:07.295719  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:07.296051  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:07.296110  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:07.795853  307731 type.go:168] "Request Body" body=""
	I1202 21:13:07.795926  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:07.796263  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:08.294869  307731 type.go:168] "Request Body" body=""
	I1202 21:13:08.294949  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:08.295301  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:08.795548  307731 type.go:168] "Request Body" body=""
	I1202 21:13:08.795627  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:08.795895  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:09.295682  307731 type.go:168] "Request Body" body=""
	I1202 21:13:09.295756  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:09.296097  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:09.296151  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:09.794843  307731 type.go:168] "Request Body" body=""
	I1202 21:13:09.794918  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:09.795258  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:10.295332  307731 type.go:168] "Request Body" body=""
	I1202 21:13:10.295413  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:10.295727  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:10.795553  307731 type.go:168] "Request Body" body=""
	I1202 21:13:10.795634  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:10.796008  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:11.295865  307731 type.go:168] "Request Body" body=""
	I1202 21:13:11.295935  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:11.296253  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:11.296301  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:11.795670  307731 type.go:168] "Request Body" body=""
	I1202 21:13:11.795744  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:11.796123  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:12.294883  307731 type.go:168] "Request Body" body=""
	I1202 21:13:12.294963  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:12.295307  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:12.795041  307731 type.go:168] "Request Body" body=""
	I1202 21:13:12.795119  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:12.795456  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:13.295695  307731 type.go:168] "Request Body" body=""
	I1202 21:13:13.295760  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:13.296010  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:13.795731  307731 type.go:168] "Request Body" body=""
	I1202 21:13:13.795805  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:13.796135  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:13.796187  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:14.294883  307731 type.go:168] "Request Body" body=""
	I1202 21:13:14.294963  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:14.295317  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:14.795004  307731 type.go:168] "Request Body" body=""
	I1202 21:13:14.795086  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:14.795364  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:15.294928  307731 type.go:168] "Request Body" body=""
	I1202 21:13:15.294999  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:15.295367  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:15.794965  307731 type.go:168] "Request Body" body=""
	I1202 21:13:15.795053  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:15.795420  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:16.294820  307731 type.go:168] "Request Body" body=""
	I1202 21:13:16.294896  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:16.295225  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:16.295299  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:16.794923  307731 type.go:168] "Request Body" body=""
	I1202 21:13:16.795000  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:16.795324  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:17.294924  307731 type.go:168] "Request Body" body=""
	I1202 21:13:17.295001  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:17.295350  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:17.795483  307731 type.go:168] "Request Body" body=""
	I1202 21:13:17.795554  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:17.795826  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:18.295595  307731 type.go:168] "Request Body" body=""
	I1202 21:13:18.295669  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:18.296052  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:18.296108  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:18.795725  307731 type.go:168] "Request Body" body=""
	I1202 21:13:18.795799  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:18.796125  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:19.295390  307731 type.go:168] "Request Body" body=""
	I1202 21:13:19.295507  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:19.295770  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:19.795535  307731 type.go:168] "Request Body" body=""
	I1202 21:13:19.795613  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:19.795944  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:20.295747  307731 type.go:168] "Request Body" body=""
	I1202 21:13:20.295849  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:20.296214  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:20.296270  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:20.795538  307731 type.go:168] "Request Body" body=""
	I1202 21:13:20.795609  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:20.795888  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:21.295858  307731 type.go:168] "Request Body" body=""
	I1202 21:13:21.295932  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:21.296299  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:21.795052  307731 type.go:168] "Request Body" body=""
	I1202 21:13:21.795128  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:21.795467  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:22.295167  307731 type.go:168] "Request Body" body=""
	I1202 21:13:22.295249  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:22.295517  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:22.794923  307731 type.go:168] "Request Body" body=""
	I1202 21:13:22.795002  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:22.795333  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:22.795386  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:23.294912  307731 type.go:168] "Request Body" body=""
	I1202 21:13:23.294987  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:23.295388  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:23.795649  307731 type.go:168] "Request Body" body=""
	I1202 21:13:23.795757  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:23.796077  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:24.295857  307731 type.go:168] "Request Body" body=""
	I1202 21:13:24.295930  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:24.296228  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:24.794835  307731 type.go:168] "Request Body" body=""
	I1202 21:13:24.794907  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:24.795214  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:25.294914  307731 type.go:168] "Request Body" body=""
	I1202 21:13:25.294993  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:25.295261  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:25.295309  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:25.794927  307731 type.go:168] "Request Body" body=""
	I1202 21:13:25.795002  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:25.795364  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:26.294920  307731 type.go:168] "Request Body" body=""
	I1202 21:13:26.294999  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:26.295345  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:26.795052  307731 type.go:168] "Request Body" body=""
	I1202 21:13:26.795129  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:26.795387  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:27.294928  307731 type.go:168] "Request Body" body=""
	I1202 21:13:27.295010  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:27.295350  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:27.295406  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:27.795057  307731 type.go:168] "Request Body" body=""
	I1202 21:13:27.795135  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:27.795446  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:28.294855  307731 type.go:168] "Request Body" body=""
	I1202 21:13:28.294926  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:28.295180  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:28.795601  307731 type.go:168] "Request Body" body=""
	I1202 21:13:28.795676  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:28.796027  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:29.295836  307731 type.go:168] "Request Body" body=""
	I1202 21:13:29.295912  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:29.296231  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:29.296292  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:29.794829  307731 type.go:168] "Request Body" body=""
	I1202 21:13:29.794900  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:29.795151  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:30.295730  307731 type.go:168] "Request Body" body=""
	I1202 21:13:30.295806  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:30.296126  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:30.794839  307731 type.go:168] "Request Body" body=""
	I1202 21:13:30.794915  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:30.795249  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:31.297776  307731 type.go:168] "Request Body" body=""
	I1202 21:13:31.297853  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:31.298178  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:31.298228  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:31.794922  307731 type.go:168] "Request Body" body=""
	I1202 21:13:31.795000  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:31.795332  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:32.295025  307731 type.go:168] "Request Body" body=""
	I1202 21:13:32.295102  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:32.295433  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:32.795735  307731 type.go:168] "Request Body" body=""
	I1202 21:13:32.795800  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:32.796165  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:33.294866  307731 type.go:168] "Request Body" body=""
	I1202 21:13:33.294952  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:33.295304  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:33.794929  307731 type.go:168] "Request Body" body=""
	I1202 21:13:33.795016  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:33.795321  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:33.795370  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:34.294855  307731 type.go:168] "Request Body" body=""
	I1202 21:13:34.294928  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:34.295184  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:34.794873  307731 type.go:168] "Request Body" body=""
	I1202 21:13:34.794945  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:34.795278  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:35.294879  307731 type.go:168] "Request Body" body=""
	I1202 21:13:35.294959  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:35.295320  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:35.794857  307731 type.go:168] "Request Body" body=""
	I1202 21:13:35.794925  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:35.795178  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:36.294917  307731 type.go:168] "Request Body" body=""
	I1202 21:13:36.294991  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:36.295321  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:36.295373  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:36.795049  307731 type.go:168] "Request Body" body=""
	I1202 21:13:36.795127  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:36.795475  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:37.295735  307731 type.go:168] "Request Body" body=""
	I1202 21:13:37.295805  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:37.296066  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:37.795800  307731 type.go:168] "Request Body" body=""
	I1202 21:13:37.795873  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:37.796213  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:38.295712  307731 type.go:168] "Request Body" body=""
	I1202 21:13:38.295790  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:38.296136  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:38.296189  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:38.794842  307731 type.go:168] "Request Body" body=""
	I1202 21:13:38.794912  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:38.795163  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:39.294845  307731 type.go:168] "Request Body" body=""
	I1202 21:13:39.294918  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:39.295253  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:39.794921  307731 type.go:168] "Request Body" body=""
	I1202 21:13:39.795000  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:39.795333  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:40.295048  307731 type.go:168] "Request Body" body=""
	I1202 21:13:40.295117  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:40.295365  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:40.794900  307731 type.go:168] "Request Body" body=""
	I1202 21:13:40.794977  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:40.795334  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:40.795390  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:41.294896  307731 type.go:168] "Request Body" body=""
	I1202 21:13:41.294974  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:41.295282  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:41.794943  307731 type.go:168] "Request Body" body=""
	I1202 21:13:41.795060  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:41.795375  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:42.295106  307731 type.go:168] "Request Body" body=""
	I1202 21:13:42.295194  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:42.295589  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:42.794935  307731 type.go:168] "Request Body" body=""
	I1202 21:13:42.795013  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:42.795335  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:43.294846  307731 type.go:168] "Request Body" body=""
	I1202 21:13:43.294916  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:43.295163  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:43.295211  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:43.794883  307731 type.go:168] "Request Body" body=""
	I1202 21:13:43.794959  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:43.795282  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:44.294913  307731 type.go:168] "Request Body" body=""
	I1202 21:13:44.294993  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:44.295365  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:44.795651  307731 type.go:168] "Request Body" body=""
	I1202 21:13:44.795720  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:44.795982  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:45.295757  307731 type.go:168] "Request Body" body=""
	I1202 21:13:45.295838  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:45.296285  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:45.296345  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:45.795035  307731 type.go:168] "Request Body" body=""
	I1202 21:13:45.795117  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:45.795459  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:46.295269  307731 type.go:168] "Request Body" body=""
	I1202 21:13:46.295336  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:46.295589  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:46.794923  307731 type.go:168] "Request Body" body=""
	I1202 21:13:46.795000  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:46.795333  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:47.294920  307731 type.go:168] "Request Body" body=""
	I1202 21:13:47.295001  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:47.295346  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:47.794865  307731 type.go:168] "Request Body" body=""
	I1202 21:13:47.794939  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:47.795193  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:47.795233  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:48.294918  307731 type.go:168] "Request Body" body=""
	I1202 21:13:48.295004  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:48.295374  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:48.795086  307731 type.go:168] "Request Body" body=""
	I1202 21:13:48.795165  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:48.795501  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:49.295207  307731 type.go:168] "Request Body" body=""
	I1202 21:13:49.295288  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:49.295554  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:13:49.795252  307731 type.go:168] "Request Body" body=""
	I1202 21:13:49.795322  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:49.795632  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:13:49.795684  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:13:50.295531  307731 type.go:168] "Request Body" body=""
	I1202 21:13:50.295604  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:13:50.295957  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[log condensed: the identical "Request Body" / GET https://192.168.49.2:8441/api/v1/nodes/functional-753958 / "Response" status="" milliseconds=0 cycle shown above repeats every ~500 ms from 21:13:50 through 21:14:51, every attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 emits the "will retry" warning after roughly every fifth attempt, at 21:13:52, 21:13:54, 21:13:57, 21:13:59, 21:14:01, 21:14:04, 21:14:06, 21:14:08, 21:14:11, 21:14:13, 21:14:15, 21:14:17, 21:14:20, 21:14:22, 21:14:25, 21:14:27, 21:14:29, 21:14:31, 21:14:33, 21:14:35, 21:14:38, 21:14:40, 21:14:42, 21:14:44, 21:14:47, 21:14:49, and 21:14:51]
	I1202 21:14:52.295227  307731 type.go:168] "Request Body" body=""
	I1202 21:14:52.295298  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:52.295552  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:52.794919  307731 type.go:168] "Request Body" body=""
	I1202 21:14:52.794995  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:52.795317  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:53.295040  307731 type.go:168] "Request Body" body=""
	I1202 21:14:53.295116  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:53.295449  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:53.794876  307731 type.go:168] "Request Body" body=""
	I1202 21:14:53.794947  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:53.795218  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:54.294904  307731 type.go:168] "Request Body" body=""
	I1202 21:14:54.294975  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:54.295320  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:54.295377  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:54.795076  307731 type.go:168] "Request Body" body=""
	I1202 21:14:54.795150  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:54.795490  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:55.295170  307731 type.go:168] "Request Body" body=""
	I1202 21:14:55.295241  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:55.295544  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:55.794930  307731 type.go:168] "Request Body" body=""
	I1202 21:14:55.795021  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:55.795317  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:56.295064  307731 type.go:168] "Request Body" body=""
	I1202 21:14:56.295148  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:56.295496  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:56.295551  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:56.795788  307731 type.go:168] "Request Body" body=""
	I1202 21:14:56.795881  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:56.796235  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:57.294957  307731 type.go:168] "Request Body" body=""
	I1202 21:14:57.295029  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:57.295368  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:57.795079  307731 type.go:168] "Request Body" body=""
	I1202 21:14:57.795157  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:57.795491  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:58.295762  307731 type.go:168] "Request Body" body=""
	I1202 21:14:58.295829  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:58.296084  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:14:58.296124  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:14:58.795828  307731 type.go:168] "Request Body" body=""
	I1202 21:14:58.795901  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:58.796192  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:59.294890  307731 type.go:168] "Request Body" body=""
	I1202 21:14:59.294971  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:59.295293  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:14:59.795662  307731 type.go:168] "Request Body" body=""
	I1202 21:14:59.795732  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:14:59.795995  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:00.294841  307731 type.go:168] "Request Body" body=""
	I1202 21:15:00.294929  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:00.295288  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:00.794964  307731 type.go:168] "Request Body" body=""
	I1202 21:15:00.795065  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:00.795443  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:15:00.795520  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:15:01.295565  307731 type.go:168] "Request Body" body=""
	I1202 21:15:01.295641  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:01.295933  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:01.795669  307731 type.go:168] "Request Body" body=""
	I1202 21:15:01.795744  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:01.796077  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:02.294851  307731 type.go:168] "Request Body" body=""
	I1202 21:15:02.294928  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:02.295300  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:02.794984  307731 type.go:168] "Request Body" body=""
	I1202 21:15:02.795058  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:02.795384  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:03.294942  307731 type.go:168] "Request Body" body=""
	I1202 21:15:03.295015  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:03.295368  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:15:03.295426  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:15:03.794971  307731 type.go:168] "Request Body" body=""
	I1202 21:15:03.795053  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:03.795395  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:04.295082  307731 type.go:168] "Request Body" body=""
	I1202 21:15:04.295157  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:04.295429  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:04.794958  307731 type.go:168] "Request Body" body=""
	I1202 21:15:04.795043  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:04.795426  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:05.294930  307731 type.go:168] "Request Body" body=""
	I1202 21:15:05.295018  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:05.295356  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:05.795116  307731 type.go:168] "Request Body" body=""
	I1202 21:15:05.795195  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:05.795515  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:15:05.795575  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:15:06.295370  307731 type.go:168] "Request Body" body=""
	I1202 21:15:06.295451  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:06.295771  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:06.795538  307731 type.go:168] "Request Body" body=""
	I1202 21:15:06.795617  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:06.795962  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:07.295701  307731 type.go:168] "Request Body" body=""
	I1202 21:15:07.295775  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:07.296023  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:07.795795  307731 type.go:168] "Request Body" body=""
	I1202 21:15:07.795872  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:07.796194  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:15:07.796261  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:15:08.294940  307731 type.go:168] "Request Body" body=""
	I1202 21:15:08.295013  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:08.295374  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:08.794862  307731 type.go:168] "Request Body" body=""
	I1202 21:15:08.794931  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:08.795235  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:09.294931  307731 type.go:168] "Request Body" body=""
	I1202 21:15:09.295007  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:09.295352  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:09.795086  307731 type.go:168] "Request Body" body=""
	I1202 21:15:09.795162  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:09.795514  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:10.299197  307731 type.go:168] "Request Body" body=""
	I1202 21:15:10.299301  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:10.299703  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1202 21:15:10.299761  307731 node_ready.go:55] error getting node "functional-753958" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753958": dial tcp 192.168.49.2:8441: connect: connection refused
	I1202 21:15:10.795524  307731 type.go:168] "Request Body" body=""
	I1202 21:15:10.795615  307731 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753958" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1202 21:15:10.796019  307731 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1202 21:15:11.294844  307731 node_ready.go:38] duration metric: took 6m0.000140797s for node "functional-753958" to be "Ready" ...
	I1202 21:15:11.298019  307731 out.go:203] 
	W1202 21:15:11.300907  307731 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 21:15:11.300927  307731 out.go:285] * 
	W1202 21:15:11.303086  307731 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:15:11.306181  307731 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 21:15:18 functional-753958 containerd[5832]: time="2025-12-02T21:15:18.696151478Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.3\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:15:19 functional-753958 containerd[5832]: time="2025-12-02T21:15:19.710675008Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 02 21:15:19 functional-753958 containerd[5832]: time="2025-12-02T21:15:19.713009436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 02 21:15:19 functional-753958 containerd[5832]: time="2025-12-02T21:15:19.720983824Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:15:19 functional-753958 containerd[5832]: time="2025-12-02T21:15:19.721785211Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:15:20 functional-753958 containerd[5832]: time="2025-12-02T21:15:20.649936722Z" level=info msg="No images store for sha256:d59db7295a44a54f2e51ebe8901f849af948acf4a9ad318dd4f11a213e39357b"
	Dec 02 21:15:20 functional-753958 containerd[5832]: time="2025-12-02T21:15:20.652234655Z" level=info msg="ImageCreate event name:\"docker.io/library/minikube-local-cache-test:functional-753958\""
	Dec 02 21:15:20 functional-753958 containerd[5832]: time="2025-12-02T21:15:20.658987271Z" level=info msg="ImageCreate event name:\"sha256:6a4d7114f1a3d4d0eb28a4f71082d140e55b9bf3c1bfc1edc182e1a4dd43b4b2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:15:20 functional-753958 containerd[5832]: time="2025-12-02T21:15:20.659306465Z" level=info msg="ImageUpdate event name:\"docker.io/library/minikube-local-cache-test:functional-753958\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:15:21 functional-753958 containerd[5832]: time="2025-12-02T21:15:21.473336839Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\""
	Dec 02 21:15:21 functional-753958 containerd[5832]: time="2025-12-02T21:15:21.475778775Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:latest\""
	Dec 02 21:15:21 functional-753958 containerd[5832]: time="2025-12-02T21:15:21.477921003Z" level=info msg="ImageDelete event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\""
	Dec 02 21:15:21 functional-753958 containerd[5832]: time="2025-12-02T21:15:21.489964520Z" level=info msg="RemoveImage \"registry.k8s.io/pause:latest\" returns successfully"
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.443219933Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\""
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.452940279Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.1\""
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.455391709Z" level=info msg="ImageDelete event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\""
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.472181387Z" level=info msg="RemoveImage \"registry.k8s.io/pause:3.1\" returns successfully"
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.570974770Z" level=info msg="No images store for sha256:a1f83055284ec302ac691d8677946d8b4e772fb7071d39ada1cc9184cb70814b"
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.573211716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:latest\""
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.582176007Z" level=info msg="ImageCreate event name:\"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.582709012Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.697261746Z" level=info msg="No images store for sha256:3ac89611d5efd8eb74174b1f04c33b7e73b651cec35b5498caf0cfdd2efd7d48"
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.699334036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.1\""
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.711113544Z" level=info msg="ImageCreate event name:\"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:15:22 functional-753958 containerd[5832]: time="2025-12-02T21:15:22.711439869Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:15:26.800965    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:15:26.801920    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:15:26.803073    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:15:26.804658    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:15:26.805080    9934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 21:15:26 up  2:57,  0 user,  load average: 0.79, 0.46, 0.91
	Linux functional-753958 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:15:23 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:15:24 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 02 21:15:24 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:24 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:24 functional-753958 kubelet[9807]: E1202 21:15:24.630416    9807 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:15:24 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:15:24 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:15:25 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 02 21:15:25 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:25 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:25 functional-753958 kubelet[9815]: E1202 21:15:25.344109    9815 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:15:25 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:15:25 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:15:26 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Dec 02 21:15:26 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:26 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:26 functional-753958 kubelet[9849]: E1202 21:15:26.121117    9849 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:15:26 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:15:26 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:15:26 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 829.
	Dec 02 21:15:26 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:26 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:15:26 functional-753958 kubelet[9938]: E1202 21:15:26.857919    9938 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:15:26 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:15:26 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
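
The kubelet journal above shows the actual crash loop: kubelet v1.35.0-beta.0 exits at startup because the host is on cgroup v1 and validation fails unless FailCgroupV1 is set to false (see the run.go:72 error repeated at restart counters 826-829). Below is a minimal workaround sketch, not executed in this run, assuming the config path the kubeadm phase reports writing (/var/lib/kubelet/config.yaml) and the v1beta1 KubeletConfiguration field spelling failCgroupV1; both are assumptions for illustration, not part of the test output:

	# Hypothetical sketch only: opt the kubelet back into cgroup v1 support,
	# per its own error message, then restart the unit.
	# Assumes /var/lib/minikube-style root access on the node and that
	# failCgroupV1 is not already set in the file (tee -a appends).
	sudo tee -a /var/lib/kubelet/config.yaml <<-'EOF'
	failCgroupV1: false
	EOF
	sudo systemctl restart kubelet

If the override takes effect, the systemd restart counter should stop climbing and the apiserver health checks above (dial tcp 192.168.49.2:8441) should begin succeeding.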
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958: exit status 2 (371.216497ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753958" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.31s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (736.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-753958 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1202 21:17:36.513873  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:19:44.125818  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:21:07.193711  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:22:36.514100  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:24:44.125832  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:27:36.513760  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-753958 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m14.539570677s)

                                                
                                                
-- stdout --
	* [functional-753958] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-753958" primary control-plane node in "functional-753958" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000055683s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001114938s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001114938s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
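The kubeadm output above already names the triage path: check the kubelet unit, read its journal, and probe the same healthz endpoint the wait loop polls. A minimal sketch of those steps, run inside the node container over minikube ssh (profile name functional-753958, commands and URL all taken from the error text; a diagnostic sketch, not a fix):

	# Is the kubelet unit running, and why did it last exit?
	out/minikube-linux-arm64 ssh -p functional-753958 -- sudo systemctl status kubelet
	out/minikube-linux-arm64 ssh -p functional-753958 -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
	# Probe the endpoint kubeadm polled for 4m0s before giving up.
	out/minikube-linux-arm64 ssh -p functional-753958 -- curl -sS http://127.0.0.1:10248/healthz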
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-753958 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m14.541009486s for "functional-753958" cluster.
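Two follow-ups are suggested directly by the failure output: minikube's own hint about the kubelet cgroup driver, and the SystemVerification warning about cgroups v1 on this 5.15 AWS kernel. Both are sketches under stated assumptions, not a confirmed fix for exit status 109:

	# Hint printed by minikube above: align the kubelet cgroup driver with systemd.
	out/minikube-linux-arm64 start -p functional-753958 --extra-config=kubelet.cgroup-driver=systemd
	# Opt-in from the cgroup v1 warning; assumes the YAML field is the
	# lower-camel form (failCgroupV1) of the option the warning names, and
	# appends it to the config file kubeadm wrote (/var/lib/kubelet/config.yaml).
	out/minikube-linux-arm64 ssh -p functional-753958 -- \
	  "echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml && sudo systemctl restart kubelet"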
I1202 21:27:42.298438  263241 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753958
helpers_test.go:243: (dbg) docker inspect functional-753958:

-- stdout --
	[
	    {
	        "Id": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	        "Created": "2025-12-02T21:00:39.470229988Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301734,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:00:39.535019201Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hostname",
	        "HostsPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hosts",
	        "LogPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a-json.log",
	        "Name": "/functional-753958",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753958:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-753958",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	                "LowerDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753958",
	                "Source": "/var/lib/docker/volumes/functional-753958/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753958",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753958",
	                "name.minikube.sigs.k8s.io": "functional-753958",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44df82336b1507d3d877e818baebb098332071ab7b3e3f7343e15c1fe55b3ab1",
	            "SandboxKey": "/var/run/docker/netns/44df82336b15",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753958": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:7f:7f:d7:c5:84",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e90d0c1216d32743827f22180e4e07c31360f0f3cc3431312aff46869716bb9",
	                    "EndpointID": "5ead8efafa1df1b03c8f1f51c032157081a17706bc48186adc0670bc42c0b521",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753958",
	                        "321ef4a88b51"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
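The inspect dump above is where the harness resolves its host port mappings; the SSH provisioning lines later in this log query the same Go template. A sketch pulling the SSH and API server ports shown above (22/tcp -> 33108, 8441/tcp -> 33111):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-753958
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-753958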
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958: exit status 2 (326.510017ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
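Reading that exit code: minikube status encodes component state as a bitmask, and 2 appears to be the cluster-not-running bit with the host bit clear, which matches the Running host line above (a hedged reading of this build's encoding). For the per-component view:

	out/minikube-linux-arm64 status -p functional-753958 --output json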
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-446665 image ls --format short --alsologtostderr                                                                                             │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image   │ functional-446665 image ls --format json --alsologtostderr                                                                                              │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image   │ functional-446665 image ls --format table --alsologtostderr                                                                                             │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ ssh     │ functional-446665 ssh pgrep buildkitd                                                                                                                   │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │                     │
	│ image   │ functional-446665 image build -t localhost/my-image:functional-446665 testdata/build --alsologtostderr                                                  │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image   │ functional-446665 image ls                                                                                                                              │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ delete  │ -p functional-446665                                                                                                                                    │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ start   │ -p functional-753958 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │                     │
	│ start   │ -p functional-753958 --alsologtostderr -v=8                                                                                                             │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:09 UTC │                     │
	│ cache   │ functional-753958 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ functional-753958 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ functional-753958 cache add registry.k8s.io/pause:latest                                                                                                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ functional-753958 cache add minikube-local-cache-test:functional-753958                                                                                 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ functional-753958 cache delete minikube-local-cache-test:functional-753958                                                                              │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ ssh     │ functional-753958 ssh sudo crictl images                                                                                                                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ ssh     │ functional-753958 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ ssh     │ functional-753958 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │                     │
	│ cache   │ functional-753958 cache reload                                                                                                                          │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ ssh     │ functional-753958 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ kubectl │ functional-753958 kubectl -- --context functional-753958 get pods                                                                                       │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │                     │
	│ start   │ -p functional-753958 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:15:27
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:15:27.807151  313474 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:15:27.807260  313474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:15:27.807264  313474 out.go:374] Setting ErrFile to fd 2...
	I1202 21:15:27.807268  313474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:15:27.807610  313474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:15:27.808015  313474 out.go:368] Setting JSON to false
	I1202 21:15:27.809366  313474 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10666,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 21:15:27.809431  313474 start.go:143] virtualization:  
	I1202 21:15:27.812823  313474 out.go:179] * [functional-753958] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:15:27.815796  313474 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:15:27.816009  313474 notify.go:221] Checking for updates...
	I1202 21:15:27.821378  313474 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:15:27.824158  313474 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:15:27.826979  313474 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 21:15:27.829780  313474 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:15:27.832616  313474 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:15:27.835951  313474 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:15:27.836043  313474 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:15:27.868236  313474 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:15:27.868329  313474 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:15:27.931411  313474 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 21:15:27.921542243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:15:27.931507  313474 docker.go:319] overlay module found
	I1202 21:15:27.934670  313474 out.go:179] * Using the docker driver based on existing profile
	I1202 21:15:27.937620  313474 start.go:309] selected driver: docker
	I1202 21:15:27.937631  313474 start.go:927] validating driver "docker" against &{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:15:27.937764  313474 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:15:27.937862  313474 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:15:27.995269  313474 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 21:15:27.986382161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:15:27.995660  313474 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 21:15:27.995688  313474 cni.go:84] Creating CNI manager for ""
	I1202 21:15:27.995745  313474 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 21:15:27.995788  313474 start.go:353] cluster config:
	{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:15:27.998840  313474 out.go:179] * Starting "functional-753958" primary control-plane node in "functional-753958" cluster
	I1202 21:15:28.001915  313474 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 21:15:28.005631  313474 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:15:28.008845  313474 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 21:15:28.008946  313474 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:15:28.029517  313474 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:15:28.029530  313474 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 21:15:28.078709  313474 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1202 21:15:28.277463  313474 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1202 21:15:28.277635  313474 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/config.json ...
	I1202 21:15:28.277718  313474 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.277817  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 21:15:28.277826  313474 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 130.54µs
	I1202 21:15:28.277840  313474 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 21:15:28.277851  313474 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.277891  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 21:15:28.277896  313474 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 46.374µs
	I1202 21:15:28.277901  313474 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 21:15:28.277913  313474 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:15:28.277910  313474 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.277949  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 21:15:28.277954  313474 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 44.659µs
	I1202 21:15:28.277951  313474 start.go:360] acquireMachinesLock for functional-753958: {Name:mk3203202a2efc5b27c2a0a16d932dc1b1f07522 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.277959  313474 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 21:15:28.277969  313474 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.277991  313474 start.go:364] duration metric: took 28.011µs to acquireMachinesLock for "functional-753958"
	I1202 21:15:28.277998  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 21:15:28.278004  313474 start.go:96] Skipping create...Using existing machine configuration
	I1202 21:15:28.278003  313474 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 34.797µs
	I1202 21:15:28.278008  313474 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 21:15:28.278008  313474 fix.go:54] fixHost starting: 
	I1202 21:15:28.278015  313474 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.278051  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 21:15:28.278067  313474 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 40.63µs
	I1202 21:15:28.278075  313474 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 21:15:28.278084  313474 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.278133  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 21:15:28.278144  313474 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 58.148µs
	I1202 21:15:28.278154  313474 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 21:15:28.278163  313474 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.278201  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 21:15:28.278206  313474 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 44.323µs
	I1202 21:15:28.278211  313474 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 21:15:28.278227  313474 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.278272  313474 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:15:28.278274  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 21:15:28.278279  313474 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 53.693µs
	I1202 21:15:28.278284  313474 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 21:15:28.278293  313474 cache.go:87] Successfully saved all images to host disk.
	I1202 21:15:28.303149  313474 fix.go:112] recreateIfNeeded on functional-753958: state=Running err=<nil>
	W1202 21:15:28.303168  313474 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 21:15:28.306592  313474 out.go:252] * Updating the running docker "functional-753958" container ...
	I1202 21:15:28.306620  313474 machine.go:94] provisionDockerMachine start ...
	I1202 21:15:28.306711  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:28.331641  313474 main.go:143] libmachine: Using SSH client type: native
	I1202 21:15:28.331992  313474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:15:28.331999  313474 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:15:28.485262  313474 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-753958
	
	I1202 21:15:28.485277  313474 ubuntu.go:182] provisioning hostname "functional-753958"
	I1202 21:15:28.485346  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:28.502136  313474 main.go:143] libmachine: Using SSH client type: native
	I1202 21:15:28.502454  313474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:15:28.502463  313474 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-753958 && echo "functional-753958" | sudo tee /etc/hostname
	I1202 21:15:28.662872  313474 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-753958
	
	I1202 21:15:28.662941  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:28.680996  313474 main.go:143] libmachine: Using SSH client type: native
	I1202 21:15:28.681283  313474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:15:28.681296  313474 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753958/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:15:28.829833  313474 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 21:15:28.829849  313474 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 21:15:28.829870  313474 ubuntu.go:190] setting up certificates
	I1202 21:15:28.829878  313474 provision.go:84] configureAuth start
	I1202 21:15:28.829936  313474 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
	I1202 21:15:28.847119  313474 provision.go:143] copyHostCerts
	I1202 21:15:28.847182  313474 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 21:15:28.847194  313474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 21:15:28.847267  313474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 21:15:28.847367  313474 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 21:15:28.847372  313474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 21:15:28.847403  313474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 21:15:28.847459  313474 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 21:15:28.847462  313474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 21:15:28.847485  313474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 21:15:28.847574  313474 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.functional-753958 san=[127.0.0.1 192.168.49.2 functional-753958 localhost minikube]
	I1202 21:15:28.960674  313474 provision.go:177] copyRemoteCerts
	I1202 21:15:28.960733  313474 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:15:28.960772  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:28.978043  313474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:15:29.081719  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 21:15:29.105765  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 21:15:29.122371  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:15:29.139343  313474 provision.go:87] duration metric: took 309.452187ms to configureAuth
	I1202 21:15:29.139359  313474 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:15:29.139545  313474 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:15:29.139550  313474 machine.go:97] duration metric: took 832.92543ms to provisionDockerMachine
	I1202 21:15:29.139557  313474 start.go:293] postStartSetup for "functional-753958" (driver="docker")
	I1202 21:15:29.139567  313474 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:15:29.139623  313474 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:15:29.139660  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:29.156608  313474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:15:29.261796  313474 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:15:29.265154  313474 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:15:29.265170  313474 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:15:29.265181  313474 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 21:15:29.265234  313474 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 21:15:29.265309  313474 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 21:15:29.265381  313474 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts -> hosts in /etc/test/nested/copy/263241
	I1202 21:15:29.265422  313474 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/263241
	I1202 21:15:29.272853  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 21:15:29.290463  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts --> /etc/test/nested/copy/263241/hosts (40 bytes)
	I1202 21:15:29.307373  313474 start.go:296] duration metric: took 167.802474ms for postStartSetup
	I1202 21:15:29.307459  313474 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:15:29.307497  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:29.324791  313474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:15:29.426726  313474 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:15:29.431481  313474 fix.go:56] duration metric: took 1.153466989s for fixHost
	I1202 21:15:29.431495  313474 start.go:83] releasing machines lock for "functional-753958", held for 1.153497537s
	I1202 21:15:29.431566  313474 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
	I1202 21:15:29.447801  313474 ssh_runner.go:195] Run: cat /version.json
	I1202 21:15:29.447846  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:29.447885  313474 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:15:29.447935  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:29.467421  313474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:15:29.471596  313474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:15:29.659911  313474 ssh_runner.go:195] Run: systemctl --version
	I1202 21:15:29.666244  313474 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 21:15:29.670444  313474 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:15:29.670514  313474 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:15:29.678098  313474 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 21:15:29.678112  313474 start.go:496] detecting cgroup driver to use...
	I1202 21:15:29.678141  313474 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 21:15:29.678186  313474 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 21:15:29.694041  313474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 21:15:29.710665  313474 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:15:29.710716  313474 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:15:29.728421  313474 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:15:29.743568  313474 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:15:29.860902  313474 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:15:29.989688  313474 docker.go:234] disabling docker service ...
	I1202 21:15:29.989770  313474 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:15:30.008558  313474 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:15:30.033480  313474 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:15:30.168415  313474 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:15:30.289508  313474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 21:15:30.302465  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:15:30.316926  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 21:15:30.325512  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 21:15:30.334372  313474 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 21:15:30.334439  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 21:15:30.343106  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 21:15:30.351679  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 21:15:30.359860  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 21:15:30.368460  313474 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:15:30.376324  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 21:15:30.384579  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 21:15:30.393108  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1202 21:15:30.401480  313474 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:15:30.408867  313474 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 21:15:30.415924  313474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:15:30.533792  313474 ssh_runner.go:195] Run: sudo systemctl restart containerd
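	Note: the run of sed commands above rewrites /etc/containerd/config.toml in place, forcing SystemdCgroup = false (to match the detected cgroupfs driver), pinning the pause image, and normalizing the runc runtime type, before daemon-reload and a containerd restart. A local Go sketch of the two central rewrites, assuming a plain file read/write is acceptable in place of sed-over-SSH:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	cfg := string(data)
	// Force the cgroupfs driver, matching the host's detected driver.
	cfg = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
		ReplaceAllString(cfg, "${1}SystemdCgroup = false")
	// Pin the sandbox (pause) image used for pod infrastructure.
	cfg = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
		ReplaceAllString(cfg, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`)
	if err := os.WriteFile(path, []byte(cfg), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}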
	I1202 21:15:30.657833  313474 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 21:15:30.657894  313474 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 21:15:30.661737  313474 start.go:564] Will wait 60s for crictl version
	I1202 21:15:30.661805  313474 ssh_runner.go:195] Run: which crictl
	I1202 21:15:30.665271  313474 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:15:30.691831  313474 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 21:15:30.691893  313474 ssh_runner.go:195] Run: containerd --version
	I1202 21:15:30.710586  313474 ssh_runner.go:195] Run: containerd --version
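	Note: after restarting containerd, the log waits up to 60s for the socket to appear (via stat) and for crictl to report a version before declaring the runtime ready. A minimal sketch of that socket wait, with the 500ms poll interval as an assumption:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the containerd socket exists or the deadline
// passes, mirroring the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}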
	I1202 21:15:30.734130  313474 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 21:15:30.737177  313474 cli_runner.go:164] Run: docker network inspect functional-753958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:15:30.753095  313474 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:15:30.760367  313474 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1202 21:15:30.763216  313474 kubeadm.go:884] updating cluster {Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 21:15:30.763354  313474 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 21:15:30.763426  313474 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:15:30.788120  313474 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 21:15:30.788132  313474 cache_images.go:86] Images are preloaded, skipping loading
	I1202 21:15:30.788138  313474 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1202 21:15:30.788245  313474 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-753958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
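	Note: the kubelet drop-in above pins the versioned kubelet binary, the node name, and the node IP; the empty first ExecStart= clears any ExecStart inherited from the base unit. A sketch of rendering such a drop-in with text/template (field names here are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

const unitTmpl = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	_ = t.Execute(os.Stdout, map[string]string{
		"Runtime": "containerd",
		"Version": "v1.35.0-beta.0",
		"Node":    "functional-753958",
		"IP":      "192.168.49.2",
	})
}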
	I1202 21:15:30.788311  313474 ssh_runner.go:195] Run: sudo crictl info
	I1202 21:15:30.816149  313474 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1202 21:15:30.816166  313474 cni.go:84] Creating CNI manager for ""
	I1202 21:15:30.816175  313474 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 21:15:30.816190  313474 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:15:30.816220  313474 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753958 NodeName:functional-753958 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:15:30.816350  313474 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-753958"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 21:15:30.816417  313474 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 21:15:30.824592  313474 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 21:15:30.824650  313474 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:15:30.832172  313474 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 21:15:30.844549  313474 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 21:15:30.856965  313474 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
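	Note: the "scp memory --> path (N bytes)" entries above write in-memory byte buffers (the rendered unit files and the kubeadm config) straight to their destination paths, with no temporary file on the source side. A local Go sketch of that helper, simplified from minikube's SSH-backed copy:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// copyMemory writes an in-memory payload to dst, creating parent dirs,
// and logs in the same "scp memory --> path (N bytes)" shape as above.
func copyMemory(dst string, data []byte, perm os.FileMode) error {
	if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
		return err
	}
	if err := os.WriteFile(dst, data, perm); err != nil {
		return err
	}
	fmt.Printf("scp memory --> %s (%d bytes)\n", dst, len(data))
	return nil
}

func main() {
	_ = copyMemory("/var/tmp/minikube/kubeadm.yaml.new", []byte("# kubeadm config ...\n"), 0644)
}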
	I1202 21:15:30.869111  313474 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:15:30.872973  313474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:15:30.993888  313474 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:15:31.292555  313474 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958 for IP: 192.168.49.2
	I1202 21:15:31.292567  313474 certs.go:195] generating shared ca certs ...
	I1202 21:15:31.292581  313474 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:15:31.292714  313474 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 21:15:31.292766  313474 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 21:15:31.292772  313474 certs.go:257] generating profile certs ...
	I1202 21:15:31.292864  313474 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key
	I1202 21:15:31.292921  313474 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key.c4f6fd35
	I1202 21:15:31.292963  313474 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key
	I1202 21:15:31.293076  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 21:15:31.293105  313474 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 21:15:31.293112  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:15:31.293138  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 21:15:31.293160  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:15:31.293184  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 21:15:31.293230  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 21:15:31.293875  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:15:31.313092  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:15:31.332062  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:15:31.351302  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 21:15:31.370658  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 21:15:31.387720  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 21:15:31.405248  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:15:31.422664  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 21:15:31.440135  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 21:15:31.457687  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:15:31.475495  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 21:15:31.492183  313474 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 21:15:31.504166  313474 ssh_runner.go:195] Run: openssl version
	I1202 21:15:31.510525  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 21:15:31.518840  313474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 21:15:31.522541  313474 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 21:15:31.522596  313474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 21:15:31.563265  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 21:15:31.571112  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:15:31.579437  313474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:15:31.583195  313474 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:15:31.583250  313474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:15:31.628890  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 21:15:31.636777  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 21:15:31.644711  313474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 21:15:31.648206  313474 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 21:15:31.648271  313474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 21:15:31.689010  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
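	Note: each CA install above follows the OpenSSL hashed-directory convention: compute the certificate's subject hash with `openssl x509 -hash -noout`, then symlink /etc/ssl/certs/<hash>.0 at the PEM so OpenSSL can resolve it by hash (e.g. b5213941.0 for minikubeCA.pem). A sketch of computing that hash, assuming the openssl CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the OpenSSL subject hash used to name the
// /etc/ssl/certs/<hash>.0 symlink created in the log above.
func subjectHash(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	hash, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
}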
	I1202 21:15:31.696812  313474 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:15:31.700482  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 21:15:31.740999  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 21:15:31.782731  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 21:15:31.823250  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 21:15:31.865611  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 21:15:31.906492  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
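	Note: the `-checkend 86400` invocations above verify that each control-plane certificate remains valid for at least the next 24 hours (86400 seconds); any that would expire get regenerated. A sketch of the same check in Go's x509 package:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the certificate at pemPath is still valid
// d from now, the condition `openssl x509 -checkend` tests above.
func validFor(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}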
	I1202 21:15:31.947359  313474 kubeadm.go:401] StartCluster: {Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:15:31.947441  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 21:15:31.947511  313474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:15:31.973182  313474 cri.go:89] found id: ""
	I1202 21:15:31.973243  313474 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:15:31.980768  313474 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 21:15:31.980777  313474 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 21:15:31.980838  313474 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 21:15:31.988019  313474 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:15:31.988518  313474 kubeconfig.go:125] found "functional-753958" server: "https://192.168.49.2:8441"
	I1202 21:15:31.989827  313474 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 21:15:31.997696  313474 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-02 21:00:56.754776837 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-02 21:15:30.864977782 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
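	Note: drift detection above hinges on diff's exit status: `diff -u old new` exits 0 when the files match and 1 when they differ, and a difference (here, the replaced admission-plugins list) triggers the cluster reconfigure. A sketch of that check, using the exec package's exit-code handling:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u` and interprets exit code 1 as "drift",
// matching the decision the log makes above.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // files differ: reconfigure
	}
	return false, "", err // diff itself failed
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted, err)
	fmt.Print(diff)
}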
	I1202 21:15:31.997711  313474 kubeadm.go:1161] stopping kube-system containers ...
	I1202 21:15:31.997724  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1202 21:15:31.997791  313474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:15:32.028400  313474 cri.go:89] found id: ""
	I1202 21:15:32.028460  313474 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 21:15:32.046252  313474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:15:32.054174  313474 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  2 21:05 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  2 21:05 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  2 21:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  2 21:05 /etc/kubernetes/scheduler.conf
	
	I1202 21:15:32.054235  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:15:32.061845  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:15:32.069217  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:15:32.069283  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:15:32.076901  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:15:32.084278  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:15:32.084333  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:15:32.091360  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:15:32.098582  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:15:32.098635  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 21:15:32.105786  313474 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 21:15:32.113101  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:15:32.157271  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:15:33.778908  313474 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.621612732s)
	I1202 21:15:33.778983  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:15:33.980110  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:15:34.046494  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
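	Note: rather than a full `kubeadm init`, the restart path above replays individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config. A sketch of that sequence, assuming kubeadm is on PATH (the real run prefixes the versioned binaries directory):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const cfg = "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		// Each phase must succeed before the next one runs.
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v\n%s", args, err, out)
			return
		}
	}
}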
	I1202 21:15:34.096642  313474 api_server.go:52] waiting for apiserver process to appear ...
	I1202 21:15:34.096721  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:34.596907  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:35.097723  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:35.597306  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:36.096830  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:36.597596  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:37.096902  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:37.597594  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:38.097418  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:38.596863  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:39.096945  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:39.596885  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:40.097285  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:40.597766  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:41.097086  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:41.597610  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:42.097762  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:42.597458  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:43.097372  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:43.596919  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:44.096844  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:44.597785  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:45.097138  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:45.597877  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:46.096835  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:46.596922  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:47.097709  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:47.597777  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:48.097634  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:48.597037  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:49.097698  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:49.597298  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:50.097150  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:50.596854  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:51.097637  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:51.596893  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:52.097490  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:52.597734  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:53.097878  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:53.597585  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:54.097045  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:54.596935  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:55.096967  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:55.597277  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:56.097741  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:56.597498  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:57.097835  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:57.596980  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:58.097825  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:58.597397  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:59.097737  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:59.597771  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:00.097000  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:00.597596  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:01.096857  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:01.596807  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:02.096858  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:02.596921  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:03.097782  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:03.597168  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:04.097826  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:04.597834  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:05.096912  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:05.597015  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:06.097323  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:06.596890  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:07.096868  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:07.597441  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:08.097848  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:08.596805  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:09.096809  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:09.597086  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:10.097186  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:10.597613  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:11.096962  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:11.596871  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:12.097854  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:12.596857  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:13.096839  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:13.596917  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:14.097213  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:14.596830  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:15.097886  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:15.597752  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:16.096793  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:16.597667  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:17.096901  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:17.597296  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:18.097838  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:18.597565  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:19.097476  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:19.597700  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:20.096912  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:20.597010  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:21.097503  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:21.596848  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:22.096818  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:22.596913  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:23.097537  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:23.596855  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:24.096911  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:24.596909  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:25.097013  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:25.596904  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:26.097839  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:26.596939  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:27.097272  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:27.597856  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:28.097301  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:28.596953  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:29.096893  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:29.597192  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:30.097860  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:30.597517  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:31.097502  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:31.597497  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:32.097081  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:32.597504  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:33.097354  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:33.596893  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
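	Note: the long run of pgrep calls above is the apiserver wait loop: pgrep exits 0 only when a matching kube-apiserver process exists, so the loop repeats roughly every 500ms until it succeeds or the wait window closes (here it never succeeds, which is why the test fails). A sketch of that loop:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until the apiserver appears or the
// deadline passes, mirroring the repeated invocations in the log above.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if err := cmd.Run(); err == nil {
			return nil // pgrep exits 0 when a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process never appeared within %v", timeout)
}

func main() {
	fmt.Println(waitForAPIServerProcess(60 * time.Second))
}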
	I1202 21:16:34.097219  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:34.097318  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:34.124123  313474 cri.go:89] found id: ""
	I1202 21:16:34.124137  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.124144  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:34.124150  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:34.124209  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:34.149042  313474 cri.go:89] found id: ""
	I1202 21:16:34.149056  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.149063  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:34.149069  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:34.149127  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:34.172796  313474 cri.go:89] found id: ""
	I1202 21:16:34.172810  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.172817  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:34.172823  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:34.172888  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:34.199775  313474 cri.go:89] found id: ""
	I1202 21:16:34.199789  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.199796  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:34.199801  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:34.199858  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:34.223410  313474 cri.go:89] found id: ""
	I1202 21:16:34.223424  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.223431  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:34.223436  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:34.223542  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:34.248663  313474 cri.go:89] found id: ""
	I1202 21:16:34.248677  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.248683  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:34.248689  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:34.248747  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:34.272612  313474 cri.go:89] found id: ""
	I1202 21:16:34.272626  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.272633  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:34.272641  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:34.272650  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:34.304889  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:34.304905  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:34.363275  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:34.363294  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:34.379039  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:34.379054  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:34.446716  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:34.438632   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.439203   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.441070   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.441841   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.443136   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:34.438632   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.439203   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.441070   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.441841   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.443136   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:34.446728  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:34.446739  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
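	Note: when the apiserver stays absent, the log-gathering pass above sweeps each control-plane component with `crictl ps -a --quiet --name=<component>`; an empty ID list yields the "0 containers" / "No container was found" warnings. A sketch of that sweep:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists container IDs for one component; nil means none,
// matching the `found id: ""` entries in the log above.
func containerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"} {
		fmt.Printf("%s: %d containers\n", c, len(containerIDs(c)))
	}
}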
	I1202 21:16:37.010773  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:37.023010  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:37.023081  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:37.074765  313474 cri.go:89] found id: ""
	I1202 21:16:37.074779  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.074786  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:37.074791  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:37.074849  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:37.105604  313474 cri.go:89] found id: ""
	I1202 21:16:37.105617  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.105624  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:37.105630  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:37.105731  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:37.135381  313474 cri.go:89] found id: ""
	I1202 21:16:37.135395  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.135402  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:37.135407  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:37.135465  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:37.159378  313474 cri.go:89] found id: ""
	I1202 21:16:37.159391  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.159398  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:37.159404  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:37.159460  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:37.184079  313474 cri.go:89] found id: ""
	I1202 21:16:37.184093  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.184100  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:37.184105  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:37.184266  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:37.208512  313474 cri.go:89] found id: ""
	I1202 21:16:37.208526  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.208533  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:37.208539  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:37.208598  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:37.231722  313474 cri.go:89] found id: ""
	I1202 21:16:37.231735  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.231742  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:37.231750  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:37.231760  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:37.247154  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:37.247171  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:37.311439  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:37.303898   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.304432   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.306024   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.306447   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.307866   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:37.303898   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.304432   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.306024   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.306447   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.307866   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:37.311449  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:37.311459  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:37.374896  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:37.374916  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:37.402545  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:37.402561  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:39.959953  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:39.969383  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:39.969445  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:39.998437  313474 cri.go:89] found id: ""
	I1202 21:16:39.998450  313474 logs.go:282] 0 containers: []
	W1202 21:16:39.998457  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:39.998463  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:39.998519  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:40.079783  313474 cri.go:89] found id: ""
	I1202 21:16:40.079799  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.079807  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:40.079813  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:40.079882  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:40.112177  313474 cri.go:89] found id: ""
	I1202 21:16:40.112203  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.112210  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:40.112217  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:40.112289  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:40.148805  313474 cri.go:89] found id: ""
	I1202 21:16:40.148820  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.148828  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:40.148834  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:40.148918  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:40.180826  313474 cri.go:89] found id: ""
	I1202 21:16:40.180841  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.180848  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:40.180855  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:40.180930  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:40.209004  313474 cri.go:89] found id: ""
	I1202 21:16:40.209018  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.209025  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:40.209032  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:40.209091  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:40.234748  313474 cri.go:89] found id: ""
	I1202 21:16:40.234762  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.234769  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:40.234778  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:40.234788  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:40.297246  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:40.289556   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.290130   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.291723   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.292196   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.293755   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:40.289556   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.290130   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.291723   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.292196   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.293755   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:40.297257  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:40.297268  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:40.359276  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:40.359297  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:40.389165  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:40.389181  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:40.447977  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:40.447997  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
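The cycle above repeats one per-component check: for each control-plane component, minikube lists CRI containers by running crictl over SSH, and every listing comes back empty, meaning no control-plane container was ever created on this node. A minimal standalone sketch of that check, assuming a local crictl on PATH and sudo access (this is plain os/exec for illustration, not minikube's internal ssh_runner/cri helpers):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same component names the log iterates over, in the same order.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// Same command the log runs over SSH: list all containers (any state)
		// whose name matches, printing only their IDs.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}

In the log every one of these listings returns `found id: ""`, which is why each round falls through to the log-gathering phase.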
	I1202 21:16:42.964946  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:42.974927  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:42.974987  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:42.997720  313474 cri.go:89] found id: ""
	I1202 21:16:42.997734  313474 logs.go:282] 0 containers: []
	W1202 21:16:42.997741  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:42.997747  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:42.997808  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:43.022947  313474 cri.go:89] found id: ""
	I1202 21:16:43.022961  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.022968  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:43.022973  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:43.023034  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:43.053855  313474 cri.go:89] found id: ""
	I1202 21:16:43.053869  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.053876  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:43.053881  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:43.053941  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:43.086462  313474 cri.go:89] found id: ""
	I1202 21:16:43.086475  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.086482  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:43.086487  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:43.086545  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:43.112776  313474 cri.go:89] found id: ""
	I1202 21:16:43.112790  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.112798  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:43.112803  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:43.112861  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:43.137549  313474 cri.go:89] found id: ""
	I1202 21:16:43.137563  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.137570  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:43.137576  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:43.137695  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:43.161710  313474 cri.go:89] found id: ""
	I1202 21:16:43.161724  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.161731  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:43.161739  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:43.161751  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:43.217891  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:43.217910  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:43.233516  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:43.233539  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:43.295127  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:43.287570   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.288255   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.289907   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.290345   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.291827   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:43.287570   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.288255   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.289907   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.290345   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.291827   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:43.295145  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:43.295157  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:43.361614  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:43.361638  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
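Every "describe nodes" attempt fails the same way: kubectl's discovery client retries its API-group fetch five times, and each dial to localhost:8441 (the --apiserver-port for this profile) is refused, meaning nothing is listening on that port. That is consistent with the empty crictl listings above. A hypothetical standalone probe for the same condition:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The apiserver port used by this test profile; refused dials here
	// produce exactly the "connection refused" stderr seen in the log.
	addr := "localhost:8441"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Printf("apiserver not reachable at %s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("something is listening on %s\n", addr)
}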
	I1202 21:16:45.891122  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:45.901162  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:45.901219  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:45.924968  313474 cri.go:89] found id: ""
	I1202 21:16:45.924982  313474 logs.go:282] 0 containers: []
	W1202 21:16:45.924989  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:45.924994  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:45.925064  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:45.960327  313474 cri.go:89] found id: ""
	I1202 21:16:45.960350  313474 logs.go:282] 0 containers: []
	W1202 21:16:45.960357  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:45.960362  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:45.960428  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:45.988303  313474 cri.go:89] found id: ""
	I1202 21:16:45.988317  313474 logs.go:282] 0 containers: []
	W1202 21:16:45.988324  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:45.988330  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:45.988395  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:46.015569  313474 cri.go:89] found id: ""
	I1202 21:16:46.015582  313474 logs.go:282] 0 containers: []
	W1202 21:16:46.015590  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:46.015595  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:46.015656  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:46.042481  313474 cri.go:89] found id: ""
	I1202 21:16:46.042494  313474 logs.go:282] 0 containers: []
	W1202 21:16:46.042511  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:46.042517  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:46.042583  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:46.076870  313474 cri.go:89] found id: ""
	I1202 21:16:46.076910  313474 logs.go:282] 0 containers: []
	W1202 21:16:46.076918  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:46.076924  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:46.076995  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:46.110449  313474 cri.go:89] found id: ""
	I1202 21:16:46.110490  313474 logs.go:282] 0 containers: []
	W1202 21:16:46.110498  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:46.110514  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:46.110525  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:46.188559  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:46.179077   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.179721   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.181442   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.182155   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.183999   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:46.179077   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.179721   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.181442   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.182155   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.183999   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:46.188579  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:46.188590  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:46.253578  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:46.253598  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:46.281754  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:46.281771  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:46.338833  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:46.338850  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
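With no CRI containers to inspect, the "Gathering logs" phase falls back to host-level sources: the journalctl units for kubelet and containerd, a dmesg filtered to warnings and worse, and a crictl listing with a docker fallback. A minimal sketch that runs the same host commands (a hypothetical helper, assuming bash, journalctl, and sudo are available; not minikube's logs.go):

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one shell pipeline and prints its combined output;
// each call mirrors one "Gathering logs for ..." line above.
func gather(label, script string) {
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Printf("=== %s (err: %v) ===\n%s\n", label, err, out)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("containerd", "sudo journalctl -u containerd -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}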
	I1202 21:16:48.855152  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:48.865294  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:48.865357  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:48.889825  313474 cri.go:89] found id: ""
	I1202 21:16:48.889839  313474 logs.go:282] 0 containers: []
	W1202 21:16:48.889846  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:48.889852  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:48.889911  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:48.913688  313474 cri.go:89] found id: ""
	I1202 21:16:48.913705  313474 logs.go:282] 0 containers: []
	W1202 21:16:48.913712  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:48.913718  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:48.913781  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:48.937742  313474 cri.go:89] found id: ""
	I1202 21:16:48.937756  313474 logs.go:282] 0 containers: []
	W1202 21:16:48.937763  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:48.937779  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:48.937837  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:48.961294  313474 cri.go:89] found id: ""
	I1202 21:16:48.961308  313474 logs.go:282] 0 containers: []
	W1202 21:16:48.961315  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:48.961320  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:48.961378  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:48.985846  313474 cri.go:89] found id: ""
	I1202 21:16:48.985860  313474 logs.go:282] 0 containers: []
	W1202 21:16:48.985866  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:48.985872  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:48.985930  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:49.014392  313474 cri.go:89] found id: ""
	I1202 21:16:49.014405  313474 logs.go:282] 0 containers: []
	W1202 21:16:49.014412  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:49.014418  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:49.014478  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:49.038987  313474 cri.go:89] found id: ""
	I1202 21:16:49.039000  313474 logs.go:282] 0 containers: []
	W1202 21:16:49.039006  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:49.039014  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:49.039024  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:49.102227  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:49.102246  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:49.120563  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:49.120579  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:49.183266  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:49.175299   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.176040   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.177692   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.178265   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.179815   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:49.175299   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.176040   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.177692   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.178265   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.179815   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:49.183286  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:49.183297  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:49.246439  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:49.246458  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:51.775321  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:51.785184  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:51.785254  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:51.809810  313474 cri.go:89] found id: ""
	I1202 21:16:51.809824  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.809831  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:51.809837  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:51.809900  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:51.835767  313474 cri.go:89] found id: ""
	I1202 21:16:51.835795  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.835802  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:51.835808  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:51.835866  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:51.865885  313474 cri.go:89] found id: ""
	I1202 21:16:51.865900  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.865914  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:51.865920  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:51.865980  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:51.891809  313474 cri.go:89] found id: ""
	I1202 21:16:51.891823  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.891831  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:51.891837  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:51.891898  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:51.916253  313474 cri.go:89] found id: ""
	I1202 21:16:51.916267  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.916274  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:51.916280  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:51.916349  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:51.941007  313474 cri.go:89] found id: ""
	I1202 21:16:51.941021  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.941028  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:51.941034  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:51.941093  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:51.969353  313474 cri.go:89] found id: ""
	I1202 21:16:51.969368  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.969375  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:51.969382  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:51.969393  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:52.025261  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:52.025287  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:52.045534  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:52.045551  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:52.124972  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:52.117298   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.117769   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.119332   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.119874   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.121486   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:52.117298   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.117769   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.119332   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.119874   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.121486   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:52.124982  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:52.124993  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:52.189351  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:52.189372  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:54.721393  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:54.732232  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:54.732290  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:54.757594  313474 cri.go:89] found id: ""
	I1202 21:16:54.757608  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.757630  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:54.757671  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:54.757734  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:54.783381  313474 cri.go:89] found id: ""
	I1202 21:16:54.783395  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.783402  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:54.783407  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:54.783480  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:54.808177  313474 cri.go:89] found id: ""
	I1202 21:16:54.808198  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.808205  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:54.808211  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:54.808291  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:54.831293  313474 cri.go:89] found id: ""
	I1202 21:16:54.831307  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.831314  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:54.831331  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:54.831399  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:54.854343  313474 cri.go:89] found id: ""
	I1202 21:16:54.854357  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.854363  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:54.854368  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:54.854427  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:54.882636  313474 cri.go:89] found id: ""
	I1202 21:16:54.882650  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.882667  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:54.882673  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:54.882739  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:54.911098  313474 cri.go:89] found id: ""
	I1202 21:16:54.911112  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.911120  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:54.911128  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:54.911138  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:54.970728  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:54.970746  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:54.986382  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:54.986399  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:55.069421  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:55.058528   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.059675   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.060854   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.061730   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.063013   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:55.058528   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.059675   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.060854   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.061730   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.063013   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:55.069437  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:55.069448  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:55.151228  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:55.151266  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:57.687319  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:57.696959  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:57.697017  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:57.720719  313474 cri.go:89] found id: ""
	I1202 21:16:57.720733  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.720740  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:57.720746  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:57.720811  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:57.749778  313474 cri.go:89] found id: ""
	I1202 21:16:57.749792  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.749800  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:57.749805  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:57.749863  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:57.772871  313474 cri.go:89] found id: ""
	I1202 21:16:57.772884  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.772891  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:57.772896  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:57.772954  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:57.799916  313474 cri.go:89] found id: ""
	I1202 21:16:57.799931  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.799937  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:57.799943  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:57.800000  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:57.827165  313474 cri.go:89] found id: ""
	I1202 21:16:57.827179  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.827186  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:57.827191  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:57.827248  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:57.852136  313474 cri.go:89] found id: ""
	I1202 21:16:57.852150  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.852157  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:57.852166  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:57.852222  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:57.876624  313474 cri.go:89] found id: ""
	I1202 21:16:57.876638  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.876645  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:57.876654  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:57.876664  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:57.940462  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:57.932401   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.933065   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.934751   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.935358   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.936935   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:57.932401   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.933065   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.934751   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.935358   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.936935   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:57.940473  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:57.940483  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:58.004519  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:58.004544  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:58.036463  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:58.036479  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:58.096205  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:58.096223  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:00.618984  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:00.629839  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:00.629906  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:00.661470  313474 cri.go:89] found id: ""
	I1202 21:17:00.661490  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.661498  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:00.661505  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:00.661578  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:00.689166  313474 cri.go:89] found id: ""
	I1202 21:17:00.689182  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.689189  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:00.689202  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:00.689273  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:00.716048  313474 cri.go:89] found id: ""
	I1202 21:17:00.716063  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.716070  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:00.716076  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:00.716143  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:00.748003  313474 cri.go:89] found id: ""
	I1202 21:17:00.748017  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.748025  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:00.748030  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:00.748093  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:00.779207  313474 cri.go:89] found id: ""
	I1202 21:17:00.779223  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.779231  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:00.779238  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:00.779312  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:00.805166  313474 cri.go:89] found id: ""
	I1202 21:17:00.805184  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.805194  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:00.805200  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:00.805273  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:00.832311  313474 cri.go:89] found id: ""
	I1202 21:17:00.832326  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.832333  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:00.832342  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:00.832352  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:00.889599  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:00.889625  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:00.906214  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:00.906230  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:00.978709  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:00.969088   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.970474   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.971348   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.973000   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.973319   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:00.969088   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.970474   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.971348   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.973000   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.973319   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:00.978720  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:00.978734  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:01.044083  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:01.044105  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:03.609427  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:03.620657  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:03.620726  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:03.651829  313474 cri.go:89] found id: ""
	I1202 21:17:03.651844  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.651851  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:03.651857  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:03.651923  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:03.678868  313474 cri.go:89] found id: ""
	I1202 21:17:03.678889  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.678896  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:03.678902  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:03.678969  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:03.708792  313474 cri.go:89] found id: ""
	I1202 21:17:03.708806  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.708814  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:03.708820  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:03.708883  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:03.738501  313474 cri.go:89] found id: ""
	I1202 21:17:03.738516  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.738524  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:03.738531  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:03.738604  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:03.770026  313474 cri.go:89] found id: ""
	I1202 21:17:03.770050  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.770057  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:03.770063  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:03.770127  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:03.804285  313474 cri.go:89] found id: ""
	I1202 21:17:03.804300  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.804308  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:03.804324  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:03.804391  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:03.831572  313474 cri.go:89] found id: ""
	I1202 21:17:03.831587  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.831594  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:03.831602  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:03.831613  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:03.860060  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:03.860086  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:03.921719  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:03.921744  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:03.939033  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:03.939051  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:04.010810  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:03.998480   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:03.999337   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:04.001085   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:04.001461   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:04.006454   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:03.998480   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:03.999337   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:04.001085   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:04.001461   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:04.006454   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
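The describe-nodes failure above is the same symptom seen from kubectl's side: the versioned binary under /var/lib/minikube/binaries/v1.35.0-beta.0/ reads the kubeconfig fine, but nothing answers on localhost:8441, so every API group fetch dies with "connection refused". A two-line dial check makes that concrete; a sketch only, with the host and port taken from the log rather than from minikube's own code:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// Expect "connect: connection refused" while no apiserver is up.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}

"Connection refused" (as opposed to a timeout) means the kernel actively rejected the connection because the port is closed, which matches the empty kube-apiserver probe above.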
	I1202 21:17:04.010823  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:04.010835  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
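From here the whole sequence repeats on a roughly three-second cadence (21:17:03, :06, :09, :12, ...): pgrep for the apiserver process, the per-component crictl probes, then the log gathering. The shape is an ordinary poll-until-deadline loop; a stdlib-only sketch, assuming two hypothetical helpers, probeAPIServer and gatherLogs, standing in for the pgrep/crictl and journalctl/kubectl steps shown in the log:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForAPIServer polls until probeAPIServer succeeds or the timeout
// elapses, gathering diagnostics on every failed attempt, like the log above.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if probeAPIServer() {
			return nil
		}
		gatherLogs()
		time.Sleep(3 * time.Second)
	}
	return errors.New("timed out waiting for kube-apiserver")
}

// probeAPIServer is a hypothetical stand-in for
// "sudo pgrep -xnf kube-apiserver.*minikube.*" plus the crictl probes.
func probeAPIServer() bool { return false }

// gatherLogs is a hypothetical stand-in for the journalctl/dmesg/kubectl gathering.
func gatherLogs() {}

func main() {
	fmt.Println(waitForAPIServer(30 * time.Second))
}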
	I1202 21:17:06.576791  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:06.587693  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:06.587761  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:06.614477  313474 cri.go:89] found id: ""
	I1202 21:17:06.614493  313474 logs.go:282] 0 containers: []
	W1202 21:17:06.614500  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:06.614506  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:06.614571  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:06.641625  313474 cri.go:89] found id: ""
	I1202 21:17:06.641639  313474 logs.go:282] 0 containers: []
	W1202 21:17:06.641646  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:06.641670  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:06.641735  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:06.667567  313474 cri.go:89] found id: ""
	I1202 21:17:06.667581  313474 logs.go:282] 0 containers: []
	W1202 21:17:06.667588  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:06.667594  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:06.667657  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:06.694684  313474 cri.go:89] found id: ""
	I1202 21:17:06.694699  313474 logs.go:282] 0 containers: []
	W1202 21:17:06.694706  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:06.694711  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:06.694777  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:06.723071  313474 cri.go:89] found id: ""
	I1202 21:17:06.723090  313474 logs.go:282] 0 containers: []
	W1202 21:17:06.723097  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:06.723103  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:06.723185  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:06.751448  313474 cri.go:89] found id: ""
	I1202 21:17:06.751462  313474 logs.go:282] 0 containers: []
	W1202 21:17:06.751469  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:06.751476  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:06.751544  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:06.781674  313474 cri.go:89] found id: ""
	I1202 21:17:06.781689  313474 logs.go:282] 0 containers: []
	W1202 21:17:06.781697  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:06.781705  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:06.781723  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:06.812650  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:06.812669  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:06.874390  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:06.874410  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:06.891708  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:06.891726  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:06.960203  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:06.952388   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:06.952955   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:06.954509   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:06.954979   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:06.956555   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:06.952388   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:06.952955   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:06.954509   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:06.954979   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:06.956555   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:06.960213  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:06.960225  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:09.527222  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:09.537303  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:09.537380  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:09.562091  313474 cri.go:89] found id: ""
	I1202 21:17:09.562112  313474 logs.go:282] 0 containers: []
	W1202 21:17:09.562120  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:09.562125  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:09.562188  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:09.587772  313474 cri.go:89] found id: ""
	I1202 21:17:09.587786  313474 logs.go:282] 0 containers: []
	W1202 21:17:09.587802  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:09.587808  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:09.587876  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:09.613205  313474 cri.go:89] found id: ""
	I1202 21:17:09.613224  313474 logs.go:282] 0 containers: []
	W1202 21:17:09.613232  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:09.613238  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:09.613298  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:09.639556  313474 cri.go:89] found id: ""
	I1202 21:17:09.639570  313474 logs.go:282] 0 containers: []
	W1202 21:17:09.639577  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:09.639583  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:09.639648  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:09.668717  313474 cri.go:89] found id: ""
	I1202 21:17:09.668731  313474 logs.go:282] 0 containers: []
	W1202 21:17:09.668737  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:09.668743  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:09.668800  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:09.692671  313474 cri.go:89] found id: ""
	I1202 21:17:09.692685  313474 logs.go:282] 0 containers: []
	W1202 21:17:09.692693  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:09.692698  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:09.692756  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:09.717454  313474 cri.go:89] found id: ""
	I1202 21:17:09.717468  313474 logs.go:282] 0 containers: []
	W1202 21:17:09.717475  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:09.717484  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:09.717494  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:09.747114  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:09.747130  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:09.803274  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:09.803294  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:09.819246  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:09.819264  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:09.879465  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:09.872021   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:09.872397   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:09.874008   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:09.874550   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:09.876009   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:09.872021   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:09.872397   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:09.874008   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:09.874550   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:09.876009   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:09.879474  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:09.879485  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:12.443298  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:12.453026  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:12.453087  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:12.479471  313474 cri.go:89] found id: ""
	I1202 21:17:12.479485  313474 logs.go:282] 0 containers: []
	W1202 21:17:12.479492  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:12.479498  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:12.479559  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:12.503554  313474 cri.go:89] found id: ""
	I1202 21:17:12.503567  313474 logs.go:282] 0 containers: []
	W1202 21:17:12.503575  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:12.503580  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:12.503637  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:12.528839  313474 cri.go:89] found id: ""
	I1202 21:17:12.528854  313474 logs.go:282] 0 containers: []
	W1202 21:17:12.528861  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:12.528866  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:12.528943  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:12.553622  313474 cri.go:89] found id: ""
	I1202 21:17:12.553644  313474 logs.go:282] 0 containers: []
	W1202 21:17:12.553663  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:12.553669  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:12.553737  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:12.579503  313474 cri.go:89] found id: ""
	I1202 21:17:12.579516  313474 logs.go:282] 0 containers: []
	W1202 21:17:12.579523  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:12.579528  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:12.579583  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:12.612312  313474 cri.go:89] found id: ""
	I1202 21:17:12.612327  313474 logs.go:282] 0 containers: []
	W1202 21:17:12.612334  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:12.612339  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:12.612413  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:12.636613  313474 cri.go:89] found id: ""
	I1202 21:17:12.636628  313474 logs.go:282] 0 containers: []
	W1202 21:17:12.636635  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:12.636642  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:12.636652  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:12.696881  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:12.689031   12670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:12.689585   12670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:12.691223   12670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:12.691645   12670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:12.693045   12670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:12.689031   12670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:12.689585   12670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:12.691223   12670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:12.691645   12670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:12.693045   12670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:12.696892  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:12.696903  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:12.758877  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:12.758898  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:12.786233  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:12.786249  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:12.841290  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:12.841308  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
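This iteration runs the same gathers in a different order (describe nodes, containerd, container status, kubelet, dmesg), which suggests the gather order is not fixed; the commands themselves are stable: a 400-line journalctl tail per unit plus a severity-filtered dmesg. A local sketch of those exact commands, assuming a systemd host; minikube executes them inside the node container over SSH:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := map[string]string{
		"kubelet":    "sudo journalctl -u kubelet -n 400",
		"containerd": "sudo journalctl -u containerd -n 400",
		"dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for name, cmd := range cmds {
		// CombinedOutput captures stdout and stderr together, close to
		// what the report embeds for each gather.
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", name, err)
			continue
		}
		fmt.Printf("=== %s: %d bytes ===\n", name, len(out))
	}
}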
	I1202 21:17:15.357945  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:15.367765  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:15.367824  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:15.395603  313474 cri.go:89] found id: ""
	I1202 21:17:15.395617  313474 logs.go:282] 0 containers: []
	W1202 21:17:15.395624  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:15.395629  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:15.395688  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:15.418671  313474 cri.go:89] found id: ""
	I1202 21:17:15.418684  313474 logs.go:282] 0 containers: []
	W1202 21:17:15.418691  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:15.418705  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:15.418763  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:15.442594  313474 cri.go:89] found id: ""
	I1202 21:17:15.442607  313474 logs.go:282] 0 containers: []
	W1202 21:17:15.442615  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:15.442624  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:15.442680  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:15.466331  313474 cri.go:89] found id: ""
	I1202 21:17:15.466345  313474 logs.go:282] 0 containers: []
	W1202 21:17:15.466352  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:15.466357  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:15.466416  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:15.491762  313474 cri.go:89] found id: ""
	I1202 21:17:15.491775  313474 logs.go:282] 0 containers: []
	W1202 21:17:15.491782  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:15.491788  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:15.491847  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:15.517473  313474 cri.go:89] found id: ""
	I1202 21:17:15.517487  313474 logs.go:282] 0 containers: []
	W1202 21:17:15.517503  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:15.517509  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:15.517577  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:15.544100  313474 cri.go:89] found id: ""
	I1202 21:17:15.544122  313474 logs.go:282] 0 containers: []
	W1202 21:17:15.544129  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:15.544138  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:15.544148  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:15.570436  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:15.570453  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:15.625879  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:15.625897  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:15.641070  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:15.641091  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:15.704897  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:15.696064   12793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:15.696969   12793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:15.698638   12793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:15.698933   12793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:15.701123   12793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:15.696064   12793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:15.696969   12793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:15.698638   12793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:15.698933   12793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:15.701123   12793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:15.704906  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:15.704916  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:18.272120  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:18.282411  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:18.282474  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:18.310144  313474 cri.go:89] found id: ""
	I1202 21:17:18.310158  313474 logs.go:282] 0 containers: []
	W1202 21:17:18.310165  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:18.310170  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:18.310230  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:18.339624  313474 cri.go:89] found id: ""
	I1202 21:17:18.339637  313474 logs.go:282] 0 containers: []
	W1202 21:17:18.339645  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:18.339650  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:18.339709  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:18.367236  313474 cri.go:89] found id: ""
	I1202 21:17:18.367252  313474 logs.go:282] 0 containers: []
	W1202 21:17:18.367259  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:18.367265  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:18.367323  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:18.391197  313474 cri.go:89] found id: ""
	I1202 21:17:18.391213  313474 logs.go:282] 0 containers: []
	W1202 21:17:18.391220  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:18.391226  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:18.391285  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:18.419753  313474 cri.go:89] found id: ""
	I1202 21:17:18.419768  313474 logs.go:282] 0 containers: []
	W1202 21:17:18.419775  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:18.419780  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:18.419841  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:18.445569  313474 cri.go:89] found id: ""
	I1202 21:17:18.445588  313474 logs.go:282] 0 containers: []
	W1202 21:17:18.445596  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:18.445601  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:18.445689  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:18.471844  313474 cri.go:89] found id: ""
	I1202 21:17:18.471858  313474 logs.go:282] 0 containers: []
	W1202 21:17:18.471865  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:18.471882  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:18.471893  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:18.500607  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:18.500623  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:18.556521  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:18.556540  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:18.572100  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:18.572115  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:18.637389  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:18.628942   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:18.629878   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:18.631639   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:18.632186   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:18.633838   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:18.628942   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:18.629878   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:18.631639   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:18.632186   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:18.633838   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:18.637399  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:18.637419  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:21.200861  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:21.210744  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:21.210815  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:21.235330  313474 cri.go:89] found id: ""
	I1202 21:17:21.235344  313474 logs.go:282] 0 containers: []
	W1202 21:17:21.235351  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:21.235356  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:21.235412  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:21.263273  313474 cri.go:89] found id: ""
	I1202 21:17:21.263287  313474 logs.go:282] 0 containers: []
	W1202 21:17:21.263294  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:21.263299  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:21.263358  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:21.295429  313474 cri.go:89] found id: ""
	I1202 21:17:21.295443  313474 logs.go:282] 0 containers: []
	W1202 21:17:21.295450  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:21.295455  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:21.295522  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:21.339988  313474 cri.go:89] found id: ""
	I1202 21:17:21.340017  313474 logs.go:282] 0 containers: []
	W1202 21:17:21.340025  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:21.340031  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:21.340094  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:21.366146  313474 cri.go:89] found id: ""
	I1202 21:17:21.366159  313474 logs.go:282] 0 containers: []
	W1202 21:17:21.366166  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:21.366171  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:21.366234  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:21.396896  313474 cri.go:89] found id: ""
	I1202 21:17:21.396910  313474 logs.go:282] 0 containers: []
	W1202 21:17:21.396917  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:21.396922  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:21.396980  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:21.424236  313474 cri.go:89] found id: ""
	I1202 21:17:21.424249  313474 logs.go:282] 0 containers: []
	W1202 21:17:21.424256  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:21.424273  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:21.424284  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:21.452897  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:21.452913  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:21.511384  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:21.511402  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:21.527095  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:21.527121  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:21.587938  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:21.579696   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:21.580462   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:21.582330   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:21.582881   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:21.584480   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:21.579696   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:21.580462   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:21.582330   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:21.582881   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:21.584480   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:21.587948  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:21.587958  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:24.156062  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:24.166297  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:24.166383  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:24.194537  313474 cri.go:89] found id: ""
	I1202 21:17:24.194550  313474 logs.go:282] 0 containers: []
	W1202 21:17:24.194558  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:24.194564  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:24.194624  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:24.218699  313474 cri.go:89] found id: ""
	I1202 21:17:24.218714  313474 logs.go:282] 0 containers: []
	W1202 21:17:24.218728  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:24.218734  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:24.218796  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:24.244266  313474 cri.go:89] found id: ""
	I1202 21:17:24.244280  313474 logs.go:282] 0 containers: []
	W1202 21:17:24.244287  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:24.244292  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:24.244352  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:24.269104  313474 cri.go:89] found id: ""
	I1202 21:17:24.269117  313474 logs.go:282] 0 containers: []
	W1202 21:17:24.269124  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:24.269129  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:24.269186  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:24.296650  313474 cri.go:89] found id: ""
	I1202 21:17:24.296663  313474 logs.go:282] 0 containers: []
	W1202 21:17:24.296671  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:24.296677  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:24.296745  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:24.323551  313474 cri.go:89] found id: ""
	I1202 21:17:24.323564  313474 logs.go:282] 0 containers: []
	W1202 21:17:24.323572  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:24.323579  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:24.323648  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:24.353085  313474 cri.go:89] found id: ""
	I1202 21:17:24.353109  313474 logs.go:282] 0 containers: []
	W1202 21:17:24.353117  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:24.353126  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:24.353136  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:24.382045  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:24.382062  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:24.438756  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:24.438773  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:24.454650  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:24.454665  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:24.517340  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:24.509295   13108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:24.509909   13108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:24.511497   13108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:24.512110   13108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:24.513756   13108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:24.509295   13108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:24.509909   13108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:24.511497   13108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:24.512110   13108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:24.513756   13108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:24.517351  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:24.517371  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:27.081832  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:27.091605  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:27.091662  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:27.116713  313474 cri.go:89] found id: ""
	I1202 21:17:27.116726  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.116734  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:27.116739  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:27.116801  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:27.140809  313474 cri.go:89] found id: ""
	I1202 21:17:27.140823  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.140830  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:27.140835  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:27.140918  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:27.167221  313474 cri.go:89] found id: ""
	I1202 21:17:27.167235  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.167242  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:27.167247  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:27.167302  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:27.191660  313474 cri.go:89] found id: ""
	I1202 21:17:27.191674  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.191681  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:27.191686  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:27.191755  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:27.219696  313474 cri.go:89] found id: ""
	I1202 21:17:27.219719  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.219727  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:27.219732  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:27.219801  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:27.247486  313474 cri.go:89] found id: ""
	I1202 21:17:27.247499  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.247506  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:27.247512  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:27.247572  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:27.270666  313474 cri.go:89] found id: ""
	I1202 21:17:27.270679  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.270687  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:27.270695  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:27.270704  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:27.329329  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:27.329349  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:27.350719  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:27.350735  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:27.420274  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:27.411429   13205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:27.412136   13205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:27.413912   13205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:27.414487   13205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:27.416006   13205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:27.420285  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:27.420338  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:27.487442  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:27.487462  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
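
The block above is one complete pass of minikube's control-plane probe: for each expected component it lists matching CRI containers, finds none, and falls back to gathering host logs. A minimal manual reproduction of the same probe, assuming a shell on the node (for example via `minikube ssh`) with crictl available; this is a sketch, not minikube's own code:

    # Probe for the same control-plane containers the log checks, in order.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids="$(sudo crictl ps -a --quiet --name="${name}")"
      if [ -z "${ids}" ]; then
        echo "no container matching ${name}"
      else
        echo "${name}: ${ids}"
      fi
    done
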
	I1202 21:17:30.014027  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:30.043373  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:30.043450  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:30.070998  313474 cri.go:89] found id: ""
	I1202 21:17:30.071012  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.071020  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:30.071026  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:30.071090  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:30.100616  313474 cri.go:89] found id: ""
	I1202 21:17:30.100630  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.100643  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:30.100649  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:30.100710  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:30.130598  313474 cri.go:89] found id: ""
	I1202 21:17:30.130612  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.130620  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:30.130626  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:30.130687  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:30.157465  313474 cri.go:89] found id: ""
	I1202 21:17:30.157479  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.157486  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:30.157492  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:30.157550  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:30.182842  313474 cri.go:89] found id: ""
	I1202 21:17:30.182857  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.182864  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:30.182870  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:30.182930  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:30.211948  313474 cri.go:89] found id: ""
	I1202 21:17:30.211962  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.211969  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:30.211975  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:30.212034  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:30.240992  313474 cri.go:89] found id: ""
	I1202 21:17:30.241006  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.241013  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:30.241020  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:30.241031  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:30.296604  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:30.296621  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:30.314431  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:30.314447  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:30.385351  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:30.377549   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:30.378411   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:30.379961   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:30.380269   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:30.381891   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:30.385362  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:30.385372  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:30.451748  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:30.451771  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
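
Every describe-nodes attempt above dies on the same TCP dial: nothing is listening on localhost:8441, so the connection is refused before TLS or authentication even start. Two quick checks that would confirm the missing listener, assuming shell access to the node (`/livez` is the apiserver's standard health endpoint; `ss` could be swapped for `netstat -tlnp`):

    # Is anything bound to the apiserver port this profile uses (8441)?
    sudo ss -tlnp | grep -w 8441 || echo "no listener on 8441"

    # The same dial kubectl attempts, minus client-go.
    curl -sk --max-time 5 https://localhost:8441/livez || echo "dial failed: apiserver down"
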
	I1202 21:17:32.983767  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:32.993977  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:32.994037  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:33.020180  313474 cri.go:89] found id: ""
	I1202 21:17:33.020195  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.020202  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:33.020208  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:33.020280  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:33.048366  313474 cri.go:89] found id: ""
	I1202 21:17:33.048379  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.048386  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:33.048392  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:33.048453  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:33.075220  313474 cri.go:89] found id: ""
	I1202 21:17:33.075240  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.075247  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:33.075253  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:33.075326  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:33.099808  313474 cri.go:89] found id: ""
	I1202 21:17:33.099823  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.099831  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:33.099837  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:33.099897  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:33.124213  313474 cri.go:89] found id: ""
	I1202 21:17:33.124226  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.124233  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:33.124239  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:33.124297  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:33.150102  313474 cri.go:89] found id: ""
	I1202 21:17:33.150116  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.150123  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:33.150129  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:33.150190  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:33.174754  313474 cri.go:89] found id: ""
	I1202 21:17:33.174768  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.174775  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:33.174784  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:33.174794  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:33.243781  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:33.236366   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:33.236709   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:33.238184   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:33.238579   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:33.240086   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:33.243791  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:33.243802  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:33.306573  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:33.306592  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:33.336859  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:33.336876  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:33.398386  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:33.398404  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
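
Each retry cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`, and the flags carry the meaning: -f matches against the full command line, -x requires the whole line to match the pattern, and -n keeps only the newest match. A non-zero exit therefore means no apiserver process exists at all, which is what keeps sending the probe back to the empty CRI listings. The check in isolation (a sketch):

    # Exit status is the whole signal: 0 means an apiserver process exists.
    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
      echo "apiserver process is running"
    else
      echo "no apiserver process"   # the branch this log is stuck in
    fi
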
	I1202 21:17:35.914658  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:35.924718  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:35.924778  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:35.950094  313474 cri.go:89] found id: ""
	I1202 21:17:35.950108  313474 logs.go:282] 0 containers: []
	W1202 21:17:35.950114  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:35.950120  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:35.950182  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:35.974633  313474 cri.go:89] found id: ""
	I1202 21:17:35.974647  313474 logs.go:282] 0 containers: []
	W1202 21:17:35.974654  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:35.974660  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:35.974719  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:35.998845  313474 cri.go:89] found id: ""
	I1202 21:17:35.998859  313474 logs.go:282] 0 containers: []
	W1202 21:17:35.998866  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:35.998872  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:35.998933  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:36.027158  313474 cri.go:89] found id: ""
	I1202 21:17:36.027173  313474 logs.go:282] 0 containers: []
	W1202 21:17:36.027186  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:36.027192  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:36.027259  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:36.052916  313474 cri.go:89] found id: ""
	I1202 21:17:36.052930  313474 logs.go:282] 0 containers: []
	W1202 21:17:36.052937  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:36.052942  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:36.053002  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:36.078331  313474 cri.go:89] found id: ""
	I1202 21:17:36.078345  313474 logs.go:282] 0 containers: []
	W1202 21:17:36.078353  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:36.078359  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:36.078421  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:36.102917  313474 cri.go:89] found id: ""
	I1202 21:17:36.102935  313474 logs.go:282] 0 containers: []
	W1202 21:17:36.102942  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:36.102952  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:36.102968  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:36.170369  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:36.162878   13506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:36.163399   13506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:36.164907   13506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:36.165325   13506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:36.166819   13506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:36.170381  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:36.170396  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:36.233123  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:36.233141  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:36.260318  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:36.260336  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:36.318506  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:36.318525  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
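
With no containers to inspect, the fallback evidence is host-level: the kubelet and containerd journals plus severity-filtered dmesg, using exactly the commands shown in the log. A compact replay that saves all three for offline triage (the output file names are arbitrary):

    # Capture the same three evidence sources the test gathers.
    sudo journalctl -u kubelet -n 400 > kubelet.log
    sudo journalctl -u containerd -n 400 > containerd.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
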
	I1202 21:17:38.836941  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:38.847151  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:38.847224  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:38.875586  313474 cri.go:89] found id: ""
	I1202 21:17:38.875599  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.875606  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:38.875612  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:38.875671  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:38.898500  313474 cri.go:89] found id: ""
	I1202 21:17:38.898514  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.898530  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:38.898538  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:38.898601  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:38.922709  313474 cri.go:89] found id: ""
	I1202 21:17:38.922723  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.922730  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:38.922735  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:38.922791  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:38.950687  313474 cri.go:89] found id: ""
	I1202 21:17:38.950701  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.950717  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:38.950723  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:38.950789  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:38.973477  313474 cri.go:89] found id: ""
	I1202 21:17:38.973490  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.973506  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:38.973514  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:38.973590  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:38.999179  313474 cri.go:89] found id: ""
	I1202 21:17:38.999193  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.999200  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:38.999206  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:38.999264  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:39.028981  313474 cri.go:89] found id: ""
	I1202 21:17:39.028995  313474 logs.go:282] 0 containers: []
	W1202 21:17:39.029002  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:39.029010  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:39.029019  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:39.091914  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:39.091935  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:39.118017  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:39.118033  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:39.174784  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:39.174803  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:39.190239  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:39.190254  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:39.253019  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:39.244615   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:39.245484   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:39.247253   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:39.247889   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:39.249431   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
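
The describe-nodes step shells out to the version-pinned kubectl that minikube installs on the node and points it at the node's own kubeconfig. Replayed by hand with the paths copied from the log, it fails identically for as long as the apiserver is down:

    # Expect exit status 1 (connection refused) until kube-apiserver is back on 8441.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    echo "exit: $?"
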
	I1202 21:17:41.753253  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:41.763094  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:41.763167  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:41.787441  313474 cri.go:89] found id: ""
	I1202 21:17:41.787457  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.787464  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:41.787470  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:41.787529  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:41.815733  313474 cri.go:89] found id: ""
	I1202 21:17:41.815746  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.815753  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:41.815759  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:41.815819  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:41.839039  313474 cri.go:89] found id: ""
	I1202 21:17:41.839053  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.839060  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:41.839065  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:41.839125  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:41.867760  313474 cri.go:89] found id: ""
	I1202 21:17:41.867775  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.867783  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:41.867796  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:41.867860  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:41.894114  313474 cri.go:89] found id: ""
	I1202 21:17:41.894128  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.894135  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:41.894141  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:41.894202  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:41.918156  313474 cri.go:89] found id: ""
	I1202 21:17:41.918169  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.918177  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:41.918182  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:41.918242  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:41.942010  313474 cri.go:89] found id: ""
	I1202 21:17:41.942024  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.942032  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:41.942040  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:41.942050  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:41.971871  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:41.971886  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:42.031586  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:42.031606  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:42.050658  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:42.050675  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:42.125237  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:42.114951   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:42.115932   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:42.118118   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:42.118731   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:42.120706   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:42.125249  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:42.125260  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
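
The container-status gather is deliberately runtime-agnostic, which is why the command in the log reads oddly. Spelled out, it resolves crictl if installed (falling back to the bare name so any failure message stays readable) and, if the crictl listing itself errors, tries the docker CLI instead:

    # Prefer crictl, fall back to docker; same shape as the logged command.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
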
	I1202 21:17:44.696530  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:44.706544  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:44.706605  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:44.734450  313474 cri.go:89] found id: ""
	I1202 21:17:44.734464  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.734470  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:44.734476  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:44.734535  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:44.758091  313474 cri.go:89] found id: ""
	I1202 21:17:44.758104  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.758111  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:44.758116  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:44.758178  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:44.782611  313474 cri.go:89] found id: ""
	I1202 21:17:44.782624  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.782631  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:44.782637  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:44.782700  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:44.806667  313474 cri.go:89] found id: ""
	I1202 21:17:44.806681  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.806689  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:44.806695  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:44.806757  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:44.830007  313474 cri.go:89] found id: ""
	I1202 21:17:44.830021  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.830031  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:44.830036  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:44.830098  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:44.853880  313474 cri.go:89] found id: ""
	I1202 21:17:44.853894  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.853901  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:44.853907  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:44.853970  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:44.878619  313474 cri.go:89] found id: ""
	I1202 21:17:44.878633  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.878640  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:44.878647  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:44.878657  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:44.894269  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:44.894286  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:44.959621  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:44.952378   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:44.952780   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:44.954251   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:44.954543   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:44.956016   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:44.959632  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:44.959645  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:45.023289  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:45.023311  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:45.085458  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:45.085476  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:47.687794  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:47.697486  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:47.697557  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:47.723246  313474 cri.go:89] found id: ""
	I1202 21:17:47.723259  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.723266  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:47.723272  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:47.723329  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:47.746713  313474 cri.go:89] found id: ""
	I1202 21:17:47.746726  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.746733  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:47.746739  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:47.746798  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:47.771766  313474 cri.go:89] found id: ""
	I1202 21:17:47.771779  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.771786  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:47.771791  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:47.771847  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:47.795263  313474 cri.go:89] found id: ""
	I1202 21:17:47.795277  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.795284  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:47.795289  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:47.795349  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:47.824522  313474 cri.go:89] found id: ""
	I1202 21:17:47.824536  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.824543  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:47.824548  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:47.824610  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:47.849074  313474 cri.go:89] found id: ""
	I1202 21:17:47.849089  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.849096  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:47.849102  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:47.849163  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:47.878497  313474 cri.go:89] found id: ""
	I1202 21:17:47.878512  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.878518  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:47.878526  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:47.878537  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:47.935644  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:47.935663  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:47.951723  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:47.951739  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:48.020401  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:48.011900   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:48.012882   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:48.014694   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:48.015052   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:48.016693   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:48.020422  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:48.020434  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:48.090722  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:48.090751  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
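
The timestamps show one probe cycle roughly every three seconds for the whole wait, consistent with a fixed poll interval. An equivalent poll-until-deadline shape, with the interval and deadline as guesses rather than minikube's actual values:

    # Poll until the apiserver answers or a deadline passes.
    deadline=$(( $(date +%s) + 360 ))
    until curl -sk --max-time 2 https://localhost:8441/livez >/dev/null; do
      if [ "$(date +%s)" -ge "${deadline}" ]; then
        echo "apiserver never became reachable" >&2
        exit 1
      fi
      sleep 3
    done
    echo "apiserver is answering on 8441"
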
	I1202 21:17:50.621799  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:50.631705  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:50.631774  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:50.656209  313474 cri.go:89] found id: ""
	I1202 21:17:50.656223  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.656230  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:50.656235  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:50.656300  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:50.680929  313474 cri.go:89] found id: ""
	I1202 21:17:50.680943  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.680950  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:50.680955  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:50.681014  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:50.705769  313474 cri.go:89] found id: ""
	I1202 21:17:50.705783  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.705790  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:50.705796  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:50.705858  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:50.731506  313474 cri.go:89] found id: ""
	I1202 21:17:50.731519  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.731526  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:50.731531  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:50.731588  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:50.754334  313474 cri.go:89] found id: ""
	I1202 21:17:50.754347  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.754354  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:50.754360  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:50.754421  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:50.778142  313474 cri.go:89] found id: ""
	I1202 21:17:50.778154  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.778162  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:50.778170  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:50.778228  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:50.801859  313474 cri.go:89] found id: ""
	I1202 21:17:50.801872  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.801880  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:50.801887  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:50.801898  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:50.862528  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:50.854527   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.855204   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.856801   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.857287   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.858805   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:50.862542  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:50.862553  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:50.928955  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:50.928974  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:50.960442  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:50.960458  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:51.018671  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:51.018690  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
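	Each probe cycle above sweeps the expected control-plane components (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) through crictl and finds no containers in any state. A minimal sketch of the same sweep run by hand, assuming a reachable node; the PROFILE value is a placeholder, not taken from this log:
	
	# Placeholder profile name; substitute the profile under test.
	PROFILE=minikube
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  # Same check the loop runs: list containers in any state matching the name.
	  minikube -p "$PROFILE" ssh -- sudo crictl ps -a --quiet --name="$c"
	done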
	I1202 21:17:53.535533  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:53.550193  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:53.550254  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:53.579796  313474 cri.go:89] found id: ""
	I1202 21:17:53.579810  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.579817  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:53.579823  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:53.579885  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:53.606043  313474 cri.go:89] found id: ""
	I1202 21:17:53.606057  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.606063  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:53.606069  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:53.606125  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:53.631276  313474 cri.go:89] found id: ""
	I1202 21:17:53.631290  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.631297  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:53.631303  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:53.631360  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:53.662387  313474 cri.go:89] found id: ""
	I1202 21:17:53.662400  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.662407  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:53.662412  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:53.662467  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:53.686744  313474 cri.go:89] found id: ""
	I1202 21:17:53.686758  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.686765  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:53.686771  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:53.686832  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:53.710015  313474 cri.go:89] found id: ""
	I1202 21:17:53.710028  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.710035  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:53.710046  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:53.710102  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:53.733042  313474 cri.go:89] found id: ""
	I1202 21:17:53.733056  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.733068  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:53.733076  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:53.733088  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:53.789666  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:53.789726  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:53.805097  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:53.805113  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:53.871790  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:53.864429   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.865010   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.866541   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.866977   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.868406   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:53.871801  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:53.871813  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:53.935260  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:53.935279  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
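	The repeated "connection refused" on localhost:8441 means nothing is listening on the configured apiserver port. A quick way to confirm this from outside the retry loop, assuming ss and curl are present in the node image (both commands are illustrative, not taken from this log):
	
	PROFILE=minikube   # placeholder; use the profile under test
	# Is anything listening on the apiserver port?
	minikube -p "$PROFILE" ssh -- sudo ss -ltnp | grep 8441
	# Probing the health endpoint directly should reproduce the refusal.
	minikube -p "$PROFILE" ssh -- curl -sk https://localhost:8441/healthz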
	I1202 21:17:56.466348  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:56.476763  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:56.476830  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:56.501775  313474 cri.go:89] found id: ""
	I1202 21:17:56.501789  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.501795  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:56.501801  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:56.501861  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:56.526404  313474 cri.go:89] found id: ""
	I1202 21:17:56.526417  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.526424  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:56.526429  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:56.526487  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:56.555809  313474 cri.go:89] found id: ""
	I1202 21:17:56.555823  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.555845  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:56.555852  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:56.555923  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:56.586754  313474 cri.go:89] found id: ""
	I1202 21:17:56.586767  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.586794  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:56.586803  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:56.586871  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:56.612048  313474 cri.go:89] found id: ""
	I1202 21:17:56.612061  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.612068  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:56.612074  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:56.612134  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:56.636363  313474 cri.go:89] found id: ""
	I1202 21:17:56.636376  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.636383  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:56.636399  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:56.636456  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:56.668372  313474 cri.go:89] found id: ""
	I1202 21:17:56.668393  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.668400  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:56.668409  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:56.668418  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:56.724439  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:56.724458  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:56.740142  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:56.740161  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:56.802960  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:56.795097   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.796001   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.797561   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.798025   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.799523   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:56.802970  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:56.802981  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:56.870497  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:56.870516  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
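	Every cycle opens with a process-level probe before the CRI sweep: pgrep for the newest process whose full command line matches kube-apiserver.*minikube.*. The same probe, runnable by hand (pattern copied from the log; PROFILE is a placeholder):
	
	PROFILE=minikube
	# Exit status 1 (no match) is what sends each cycle into log gathering.
	minikube -p "$PROFILE" ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'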
	I1202 21:17:59.400859  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:59.410723  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:59.410792  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:59.434739  313474 cri.go:89] found id: ""
	I1202 21:17:59.434754  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.434761  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:59.434766  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:59.434823  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:59.459718  313474 cri.go:89] found id: ""
	I1202 21:17:59.459731  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.459738  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:59.459743  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:59.459800  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:59.484078  313474 cri.go:89] found id: ""
	I1202 21:17:59.484091  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.484098  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:59.484103  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:59.484161  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:59.510484  313474 cri.go:89] found id: ""
	I1202 21:17:59.510498  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.510505  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:59.510510  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:59.510569  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:59.535191  313474 cri.go:89] found id: ""
	I1202 21:17:59.535204  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.535211  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:59.535217  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:59.535278  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:59.566496  313474 cri.go:89] found id: ""
	I1202 21:17:59.566509  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.566516  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:59.566522  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:59.566591  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:59.605449  313474 cri.go:89] found id: ""
	I1202 21:17:59.605463  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.605470  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:59.605479  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:59.605492  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:59.670641  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:59.670659  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:59.698362  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:59.698378  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:59.755057  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:59.755075  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:59.771334  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:59.771350  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:59.833359  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:59.825268   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.826042   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.827699   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.828304   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.830013   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:02.334350  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:02.344576  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:02.344646  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:02.372330  313474 cri.go:89] found id: ""
	I1202 21:18:02.372347  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.372355  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:02.372361  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:02.372421  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:02.403621  313474 cri.go:89] found id: ""
	I1202 21:18:02.403635  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.403642  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:02.403648  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:02.403710  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:02.432672  313474 cri.go:89] found id: ""
	I1202 21:18:02.432686  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.432693  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:02.432700  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:02.432762  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:02.464631  313474 cri.go:89] found id: ""
	I1202 21:18:02.464645  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.464652  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:02.464658  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:02.464720  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:02.491546  313474 cri.go:89] found id: ""
	I1202 21:18:02.491559  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.491566  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:02.491572  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:02.491628  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:02.515275  313474 cri.go:89] found id: ""
	I1202 21:18:02.515289  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.515296  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:02.515301  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:02.515361  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:02.542560  313474 cri.go:89] found id: ""
	I1202 21:18:02.542574  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.542581  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:02.542589  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:02.542599  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:02.602107  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:02.602123  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:02.624739  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:02.624757  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:02.689790  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:02.681842   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.682258   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.683537   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.684226   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.686056   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:02.689808  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:02.689819  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:02.752499  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:02.752518  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
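	When the probes fail, the loop gathers the same four artifacts each time: the kubelet journal, the containerd journal, filtered dmesg, and container status. The equivalent manual commands, taken from the log (PROFILE is a placeholder; --no-pager is added for non-interactive use):
	
	PROFILE=minikube
	minikube -p "$PROFILE" ssh -- 'sudo journalctl -u kubelet -n 400 --no-pager'
	minikube -p "$PROFILE" ssh -- 'sudo journalctl -u containerd -n 400 --no-pager'
	minikube -p "$PROFILE" ssh -- 'sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400'
	minikube -p "$PROFILE" ssh -- 'sudo crictl ps -a'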
	I1202 21:18:05.283528  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:05.293718  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:05.293787  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:05.317745  313474 cri.go:89] found id: ""
	I1202 21:18:05.317758  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.317764  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:05.317770  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:05.317825  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:05.342721  313474 cri.go:89] found id: ""
	I1202 21:18:05.342735  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.342742  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:05.342747  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:05.342805  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:05.367273  313474 cri.go:89] found id: ""
	I1202 21:18:05.367295  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.367303  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:05.367311  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:05.367374  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:05.392617  313474 cri.go:89] found id: ""
	I1202 21:18:05.392630  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.392639  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:05.392644  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:05.392720  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:05.416853  313474 cri.go:89] found id: ""
	I1202 21:18:05.416866  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.416873  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:05.416878  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:05.416939  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:05.440831  313474 cri.go:89] found id: ""
	I1202 21:18:05.440845  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.440852  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:05.440858  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:05.440925  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:05.468689  313474 cri.go:89] found id: ""
	I1202 21:18:05.468702  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.468709  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:05.468718  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:05.468728  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:05.532922  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:05.524825   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.525211   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.526892   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.527288   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.529015   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:05.532931  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:05.532956  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:05.603067  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:05.603086  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:05.634107  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:05.634125  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:05.690509  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:05.690527  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:08.208420  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:08.218671  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:08.218745  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:08.244809  313474 cri.go:89] found id: ""
	I1202 21:18:08.244823  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.244831  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:08.244837  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:08.244895  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:08.270054  313474 cri.go:89] found id: ""
	I1202 21:18:08.270068  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.270075  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:08.270080  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:08.270145  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:08.295277  313474 cri.go:89] found id: ""
	I1202 21:18:08.295291  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.295298  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:08.295304  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:08.295366  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:08.319112  313474 cri.go:89] found id: ""
	I1202 21:18:08.319125  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.319132  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:08.319138  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:08.319205  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:08.342874  313474 cri.go:89] found id: ""
	I1202 21:18:08.342888  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.342901  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:08.342908  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:08.342965  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:08.371370  313474 cri.go:89] found id: ""
	I1202 21:18:08.371384  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.371391  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:08.371397  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:08.371464  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:08.396154  313474 cri.go:89] found id: ""
	I1202 21:18:08.396167  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.396175  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:08.396183  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:08.396193  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:08.451337  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:08.451356  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:08.466550  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:08.466565  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:08.528549  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:08.520562   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.521190   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.522827   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.523353   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.525032   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:08.528558  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:08.528569  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:08.606008  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:08.606028  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
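	The timestamps show the probe repeating on a roughly three-second cadence (21:17:50, :53, :56, ...). A hedged sketch of an equivalent external wait loop; the five-minute budget is an assumption, not a value from this log:
	
	PROFILE=minikube   # placeholder
	for i in $(seq 1 100); do   # 100 x 3s ~= 5 minutes (assumed budget)
	  if minikube -p "$PROFILE" ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	    echo "kube-apiserver is up"
	    break
	  fi
	  sleep 3
	done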
	I1202 21:18:11.138262  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:11.148937  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:11.148998  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:11.173696  313474 cri.go:89] found id: ""
	I1202 21:18:11.173710  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.173718  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:11.173723  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:11.173790  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:11.198792  313474 cri.go:89] found id: ""
	I1202 21:18:11.198805  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.198813  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:11.198818  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:11.198880  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:11.222802  313474 cri.go:89] found id: ""
	I1202 21:18:11.222816  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.222823  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:11.222829  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:11.222890  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:11.247731  313474 cri.go:89] found id: ""
	I1202 21:18:11.247745  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.247752  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:11.247757  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:11.247814  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:11.272133  313474 cri.go:89] found id: ""
	I1202 21:18:11.272146  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.272153  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:11.272159  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:11.272217  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:11.296871  313474 cri.go:89] found id: ""
	I1202 21:18:11.296885  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.296892  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:11.296897  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:11.296958  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:11.321716  313474 cri.go:89] found id: ""
	I1202 21:18:11.321729  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.321736  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:11.321744  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:11.321754  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:11.377048  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:11.377066  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:11.393570  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:11.393587  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:11.458188  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:11.449467   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.450311   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.451986   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.452297   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.454177   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:11.458204  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:11.458220  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:11.525584  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:11.525602  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:14.058201  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:14.068731  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:14.068793  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:14.095660  313474 cri.go:89] found id: ""
	I1202 21:18:14.095674  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.095682  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:14.095688  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:14.095754  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:14.122077  313474 cri.go:89] found id: ""
	I1202 21:18:14.122090  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.122097  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:14.122102  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:14.122163  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:14.150178  313474 cri.go:89] found id: ""
	I1202 21:18:14.150192  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.150199  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:14.150204  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:14.150265  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:14.175340  313474 cri.go:89] found id: ""
	I1202 21:18:14.175353  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.175360  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:14.175372  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:14.175431  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:14.199105  313474 cri.go:89] found id: ""
	I1202 21:18:14.199118  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.199125  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:14.199130  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:14.199187  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:14.224274  313474 cri.go:89] found id: ""
	I1202 21:18:14.224288  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.224295  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:14.224300  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:14.224363  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:14.251445  313474 cri.go:89] found id: ""
	I1202 21:18:14.251458  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.251465  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:14.251473  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:14.251487  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:14.320250  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:14.311973   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.312750   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.314433   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.314978   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.316585   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:14.311973   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.312750   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.314433   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.314978   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.316585   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
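Every describe-nodes attempt in this run fails identically: no kube-apiserver container exists, so nothing listens on port 8441 and kubectl's discovery GET against https://localhost:8441/api is refused at the loopback address. Both halves of that diagnosis can be confirmed directly on the node; the crictl probe below is the exact command minikube runs above, while the ss check is an added assumption for illustration:

    # Any kube-apiserver container at all, running or exited? (verbatim from the log)
    sudo crictl ps -a --quiet --name=kube-apiserver

    # Anything bound to the apiserver port? (ss availability is an assumption)
    sudo ss -tlnp | grep -w 8441 || echo "nothing listening on 8441"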
	I1202 21:18:14.320261  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:14.320274  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:14.383255  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:14.383276  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:14.411409  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:14.411425  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:14.472223  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:14.472248  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
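Each retry gathers the same five diagnostics, only the order varies: the kubelet and containerd journals, kernel warnings, container status with a docker fallback, and a describe-nodes call through the bundled kubectl. Restated as one shell pass (the commands are copied from the Run: lines above; the output redirections are illustrative assumptions):

    # Collect the per-retry diagnostics in one sweep
    sudo journalctl -u kubelet -n 400    > /tmp/kubelet.log
    sudo journalctl -u containerd -n 400 > /tmp/containerd.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > /tmp/dmesg.log
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig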
	I1202 21:18:16.989804  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:17.000093  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:17.000155  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:17.028092  313474 cri.go:89] found id: ""
	I1202 21:18:17.028116  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.028124  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:17.028130  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:17.028198  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:17.052924  313474 cri.go:89] found id: ""
	I1202 21:18:17.052945  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.052952  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:17.052958  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:17.053029  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:17.078703  313474 cri.go:89] found id: ""
	I1202 21:18:17.078727  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.078734  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:17.078742  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:17.078812  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:17.104168  313474 cri.go:89] found id: ""
	I1202 21:18:17.104182  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.104189  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:17.104195  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:17.104299  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:17.127996  313474 cri.go:89] found id: ""
	I1202 21:18:17.128010  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.128017  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:17.128023  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:17.128088  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:17.152013  313474 cri.go:89] found id: ""
	I1202 21:18:17.152027  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.152034  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:17.152040  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:17.152100  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:17.180838  313474 cri.go:89] found id: ""
	I1202 21:18:17.180853  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.180860  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:17.180868  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:17.180878  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:17.208724  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:17.208740  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:17.264017  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:17.264035  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:17.280767  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:17.280783  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:17.347738  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:17.340260   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.340861   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.342357   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.342869   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.344337   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:17.340260   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.340861   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.342357   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.342869   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.344337   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
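The five memcache.go errors per attempt are kubectl's discovery client retrying the same API-group request; each retry is an HTTP GET against https://localhost:8441/api that is refused at the TCP layer before any Kubernetes handshake can happen. The same refusal is reproducible without kubectl at all (the curl invocation is an illustrative assumption, not taken from the log):

    # Reproduce the discovery failure with a bare HTTP probe
    curl -k --max-time 32 https://localhost:8441/api
    # expected: curl: (7) Failed to connect to localhost port 8441 ... Connection refused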
	I1202 21:18:17.347749  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:17.347762  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:19.913786  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:19.923690  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:19.923756  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:19.948485  313474 cri.go:89] found id: ""
	I1202 21:18:19.948499  313474 logs.go:282] 0 containers: []
	W1202 21:18:19.948506  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:19.948512  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:19.948572  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:19.973040  313474 cri.go:89] found id: ""
	I1202 21:18:19.973054  313474 logs.go:282] 0 containers: []
	W1202 21:18:19.973062  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:19.973067  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:19.973129  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:19.997059  313474 cri.go:89] found id: ""
	I1202 21:18:19.997073  313474 logs.go:282] 0 containers: []
	W1202 21:18:19.997080  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:19.997086  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:19.997143  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:20.023852  313474 cri.go:89] found id: ""
	I1202 21:18:20.023868  313474 logs.go:282] 0 containers: []
	W1202 21:18:20.023876  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:20.023882  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:20.023963  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:20.050761  313474 cri.go:89] found id: ""
	I1202 21:18:20.050775  313474 logs.go:282] 0 containers: []
	W1202 21:18:20.050782  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:20.050788  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:20.050849  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:20.080281  313474 cri.go:89] found id: ""
	I1202 21:18:20.080299  313474 logs.go:282] 0 containers: []
	W1202 21:18:20.080318  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:20.080324  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:20.080396  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:20.104993  313474 cri.go:89] found id: ""
	I1202 21:18:20.105008  313474 logs.go:282] 0 containers: []
	W1202 21:18:20.105015  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:20.105024  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:20.105035  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:20.165434  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:20.165453  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:20.181890  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:20.181907  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:20.248978  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:20.240575   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.241189   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.242918   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.243424   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.244930   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:20.240575   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.241189   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.242918   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.243424   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.244930   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:20.248989  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:20.249000  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:20.310960  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:20.310980  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:22.840884  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:22.851984  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:22.852053  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:22.877753  313474 cri.go:89] found id: ""
	I1202 21:18:22.877766  313474 logs.go:282] 0 containers: []
	W1202 21:18:22.877773  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:22.877779  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:22.877837  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:22.906410  313474 cri.go:89] found id: ""
	I1202 21:18:22.906424  313474 logs.go:282] 0 containers: []
	W1202 21:18:22.906431  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:22.906437  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:22.906500  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:22.930057  313474 cri.go:89] found id: ""
	I1202 21:18:22.930071  313474 logs.go:282] 0 containers: []
	W1202 21:18:22.930077  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:22.930083  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:22.930143  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:22.953434  313474 cri.go:89] found id: ""
	I1202 21:18:22.953447  313474 logs.go:282] 0 containers: []
	W1202 21:18:22.953454  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:22.953460  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:22.953537  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:22.977521  313474 cri.go:89] found id: ""
	I1202 21:18:22.977534  313474 logs.go:282] 0 containers: []
	W1202 21:18:22.977541  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:22.977546  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:22.977605  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:23.002292  313474 cri.go:89] found id: ""
	I1202 21:18:23.002308  313474 logs.go:282] 0 containers: []
	W1202 21:18:23.002316  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:23.002322  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:23.002394  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:23.036373  313474 cri.go:89] found id: ""
	I1202 21:18:23.036387  313474 logs.go:282] 0 containers: []
	W1202 21:18:23.036395  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:23.036403  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:23.036415  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:23.095655  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:23.095673  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:23.111535  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:23.111553  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:23.173705  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:23.165173   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.166011   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.167619   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.168221   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.169997   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:23.165173   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.166011   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.167619   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.168221   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.169997   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:23.173715  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:23.173726  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:23.236268  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:23.236289  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:25.766078  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:25.775931  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:25.775992  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:25.803734  313474 cri.go:89] found id: ""
	I1202 21:18:25.803748  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.803755  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:25.803761  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:25.803819  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:25.834986  313474 cri.go:89] found id: ""
	I1202 21:18:25.834998  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.835005  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:25.835011  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:25.835067  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:25.868893  313474 cri.go:89] found id: ""
	I1202 21:18:25.868906  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.868914  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:25.868919  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:25.868978  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:25.893444  313474 cri.go:89] found id: ""
	I1202 21:18:25.893458  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.893465  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:25.893470  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:25.893535  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:25.920960  313474 cri.go:89] found id: ""
	I1202 21:18:25.920981  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.921016  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:25.921022  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:25.921084  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:25.945498  313474 cri.go:89] found id: ""
	I1202 21:18:25.945512  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.945519  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:25.945524  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:25.945584  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:25.970324  313474 cri.go:89] found id: ""
	I1202 21:18:25.970338  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.970345  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:25.970352  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:25.970363  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:26.026110  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:26.026130  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:26.042911  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:26.042929  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:26.110842  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:26.102647   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.103280   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.105091   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.105699   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.107315   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:26.102647   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.103280   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.105091   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.105699   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.107315   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:26.110852  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:26.110863  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:26.172311  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:26.172331  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:28.700308  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:28.710060  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:28.710120  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:28.735161  313474 cri.go:89] found id: ""
	I1202 21:18:28.735174  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.735181  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:28.735186  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:28.735244  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:28.759111  313474 cri.go:89] found id: ""
	I1202 21:18:28.759125  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.759132  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:28.759138  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:28.759195  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:28.782985  313474 cri.go:89] found id: ""
	I1202 21:18:28.782999  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.783006  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:28.783011  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:28.783069  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:28.820172  313474 cri.go:89] found id: ""
	I1202 21:18:28.820186  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.820203  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:28.820208  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:28.820274  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:28.850833  313474 cri.go:89] found id: ""
	I1202 21:18:28.850846  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.850863  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:28.850869  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:28.850927  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:28.882012  313474 cri.go:89] found id: ""
	I1202 21:18:28.882025  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.882032  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:28.882038  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:28.882093  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:28.908111  313474 cri.go:89] found id: ""
	I1202 21:18:28.908125  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.908132  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:28.908139  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:28.908150  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:28.934318  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:28.934333  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:28.989499  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:28.989518  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:29.007046  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:29.007064  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:29.083779  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:29.075539   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.076231   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.077811   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.078418   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.080191   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:29.075539   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.076231   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.077811   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.078418   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.080191   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:29.083789  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:29.083801  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:31.646079  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:31.657486  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:31.657549  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:31.683678  313474 cri.go:89] found id: ""
	I1202 21:18:31.683692  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.683699  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:31.683704  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:31.683759  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:31.712328  313474 cri.go:89] found id: ""
	I1202 21:18:31.712342  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.712349  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:31.712354  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:31.712410  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:31.736788  313474 cri.go:89] found id: ""
	I1202 21:18:31.736802  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.736808  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:31.736814  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:31.736870  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:31.761882  313474 cri.go:89] found id: ""
	I1202 21:18:31.761896  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.761903  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:31.761908  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:31.761968  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:31.785756  313474 cri.go:89] found id: ""
	I1202 21:18:31.785770  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.785778  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:31.785783  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:31.785843  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:31.820411  313474 cri.go:89] found id: ""
	I1202 21:18:31.820424  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.820431  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:31.820437  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:31.820493  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:31.853589  313474 cri.go:89] found id: ""
	I1202 21:18:31.853603  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.853611  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:31.853619  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:31.853630  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:31.921797  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:31.913330   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.913979   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.915473   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.915981   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.917835   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:31.913330   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.913979   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.915473   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.915981   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.917835   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:31.921807  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:31.921818  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:31.983142  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:31.983161  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:32.019032  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:32.019047  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:32.075826  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:32.075845  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
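The pgrep probes land roughly every two and a half seconds (21:18:14, :17, :19, :22, :25, :28, :31, :34), which is the shape of minikube's apiserver wait loop: look for the process, and on a miss enumerate containers and re-gather logs before the next try. A minimal stand-in for that loop, with the sleep interval and timeout as assumptions rather than minikube's real backoff:

    # Hypothetical reduction of the wait loop driving the log above
    deadline=$((SECONDS + 240))   # the 240 s budget is an assumption
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if [ "$SECONDS" -ge "$deadline" ]; then
            echo "kube-apiserver never appeared" >&2; exit 1
        fi
        sudo crictl ps -a --quiet --name=kube-apiserver   # empty on every pass here
        sleep 2.5
    done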
	I1202 21:18:34.595298  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:34.606306  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:34.606370  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:34.629306  313474 cri.go:89] found id: ""
	I1202 21:18:34.629321  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.629328  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:34.629334  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:34.629393  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:34.653285  313474 cri.go:89] found id: ""
	I1202 21:18:34.653299  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.653305  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:34.653311  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:34.653369  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:34.679517  313474 cri.go:89] found id: ""
	I1202 21:18:34.679531  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.679538  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:34.679543  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:34.679601  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:34.703382  313474 cri.go:89] found id: ""
	I1202 21:18:34.703395  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.703403  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:34.703409  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:34.703472  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:34.726696  313474 cri.go:89] found id: ""
	I1202 21:18:34.726710  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.726717  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:34.726723  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:34.726784  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:34.751128  313474 cri.go:89] found id: ""
	I1202 21:18:34.751141  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.751148  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:34.751153  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:34.751213  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:34.775011  313474 cri.go:89] found id: ""
	I1202 21:18:34.775025  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.775032  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:34.775047  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:34.775057  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:34.835694  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:34.835712  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:34.852614  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:34.852628  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:34.915032  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:34.907089   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.907665   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.909375   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.909948   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.911554   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:34.907089   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.907665   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.909375   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.909948   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.911554   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:34.915042  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:34.915053  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:34.976914  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:34.976933  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:37.512733  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:37.523297  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:37.523360  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:37.547452  313474 cri.go:89] found id: ""
	I1202 21:18:37.547471  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.547478  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:37.547484  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:37.547553  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:37.573439  313474 cri.go:89] found id: ""
	I1202 21:18:37.573453  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.573460  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:37.573471  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:37.573529  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:37.597566  313474 cri.go:89] found id: ""
	I1202 21:18:37.597579  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.597586  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:37.597593  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:37.597689  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:37.622743  313474 cri.go:89] found id: ""
	I1202 21:18:37.622757  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.622764  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:37.622769  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:37.622833  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:37.650998  313474 cri.go:89] found id: ""
	I1202 21:18:37.651012  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.651019  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:37.651024  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:37.651082  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:37.675113  313474 cri.go:89] found id: ""
	I1202 21:18:37.675126  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.675133  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:37.675139  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:37.675198  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:37.703998  313474 cri.go:89] found id: ""
	I1202 21:18:37.704011  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.704019  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:37.704028  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:37.704039  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:37.731894  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:37.731909  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:37.789286  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:37.789304  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:37.806026  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:37.806041  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:37.883651  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:37.875622   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.876183   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.877815   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.878233   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.879787   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:37.883661  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:37.883672  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
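	The block above is one full diagnostic cycle: minikube probes each expected control-plane container by name and, finding none, falls back to gathering host-level logs. A minimal sketch for reproducing the probe by hand, assuming shell access to the node (e.g. via minikube ssh), using the same component names the log queries:

		for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
		  sudo crictl ps -a --quiet --name="$name"
		done

	An empty result for every name, as here, means the control-plane containers were never created, so the failure sits upstream of the containers themselves.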
	I1202 21:18:40.449584  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:40.459754  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:40.459815  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:40.484277  313474 cri.go:89] found id: ""
	I1202 21:18:40.484290  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.484297  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:40.484303  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:40.484363  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:40.512957  313474 cri.go:89] found id: ""
	I1202 21:18:40.512971  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.512978  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:40.512984  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:40.513043  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:40.539344  313474 cri.go:89] found id: ""
	I1202 21:18:40.539357  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.539365  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:40.539371  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:40.539439  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:40.569762  313474 cri.go:89] found id: ""
	I1202 21:18:40.569776  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.569783  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:40.569789  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:40.569865  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:40.599530  313474 cri.go:89] found id: ""
	I1202 21:18:40.599589  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.599597  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:40.599603  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:40.599663  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:40.624508  313474 cri.go:89] found id: ""
	I1202 21:18:40.624521  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.624527  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:40.624533  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:40.624590  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:40.654772  313474 cri.go:89] found id: ""
	I1202 21:18:40.654786  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.654793  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:40.654800  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:40.654811  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:40.671128  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:40.671146  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:40.739442  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:40.731281   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.732035   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.733699   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.734266   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.735915   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:40.739452  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:40.739465  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:40.802579  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:40.802600  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:40.842887  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:40.842905  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
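	Each cycle is gated on the process check at its top: pgrep with -x (exact match), -n (newest matching process) and -f (match against the full command line) looks for a running kube-apiserver belonging to this minikube node, and its non-zero exit status is what sends the code back into another round of log gathering. Reproduced by hand, the check is simply:

		sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	which prints a PID and exits 0 once the apiserver process finally appears.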
	I1202 21:18:43.407132  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:43.417207  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:43.417283  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:43.445187  313474 cri.go:89] found id: ""
	I1202 21:18:43.445201  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.445208  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:43.445214  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:43.445270  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:43.469935  313474 cri.go:89] found id: ""
	I1202 21:18:43.469949  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.469957  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:43.469962  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:43.470021  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:43.495370  313474 cri.go:89] found id: ""
	I1202 21:18:43.495383  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.495391  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:43.495396  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:43.495454  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:43.519120  313474 cri.go:89] found id: ""
	I1202 21:18:43.519133  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.519149  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:43.519155  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:43.519213  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:43.548201  313474 cri.go:89] found id: ""
	I1202 21:18:43.548216  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.548223  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:43.548228  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:43.548290  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:43.573077  313474 cri.go:89] found id: ""
	I1202 21:18:43.573091  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.573099  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:43.573104  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:43.573166  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:43.598032  313474 cri.go:89] found id: ""
	I1202 21:18:43.598046  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.598053  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:43.598062  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:43.598072  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:43.625764  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:43.625780  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:43.681770  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:43.681787  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:43.698012  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:43.698028  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:43.764049  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:43.756290   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.756978   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.758602   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.759087   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.760588   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:43.764060  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:43.764071  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
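	The describe-nodes step fails the same way every cycle: the kubectl binary shipped for v1.35.0-beta.0 is invoked against the in-VM kubeconfig, and every attempt to reach the apiserver on port 8441 is refused. The exact command, runnable by hand from the node:

		sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

	"connection refused" on [::1]:8441, rather than a timeout or TLS error, indicates nothing is listening on the apiserver port at all, consistent with the empty crictl listings.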
	I1202 21:18:46.332493  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:46.342812  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:46.342877  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:46.367988  313474 cri.go:89] found id: ""
	I1202 21:18:46.368002  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.368018  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:46.368024  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:46.368091  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:46.392483  313474 cri.go:89] found id: ""
	I1202 21:18:46.392496  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.392512  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:46.392518  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:46.392574  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:46.429495  313474 cri.go:89] found id: ""
	I1202 21:18:46.429514  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.429522  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:46.429527  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:46.429598  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:46.455204  313474 cri.go:89] found id: ""
	I1202 21:18:46.455218  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.455225  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:46.455231  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:46.455295  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:46.479783  313474 cri.go:89] found id: ""
	I1202 21:18:46.479800  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.479808  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:46.479813  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:46.479880  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:46.504674  313474 cri.go:89] found id: ""
	I1202 21:18:46.504688  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.504696  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:46.504701  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:46.504767  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:46.534919  313474 cri.go:89] found id: ""
	I1202 21:18:46.534933  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.534940  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:46.534948  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:46.534968  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:46.591507  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:46.591526  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:46.607216  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:46.607233  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:46.672448  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:46.664475   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.665046   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.666657   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.667197   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.668631   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:46.672459  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:46.672469  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:46.738404  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:46.738424  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
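	The kubelet and containerd logs are pulled straight from journald; the same two commands can be run interactively when triaging this failure, with -n 400 limiting each to the most recent 400 journal entries:

		sudo journalctl -u kubelet -n 400
		sudo journalctl -u containerd -n 400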
	I1202 21:18:49.269367  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:49.279307  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:49.279370  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:49.302419  313474 cri.go:89] found id: ""
	I1202 21:18:49.302432  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.302439  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:49.302445  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:49.302501  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:49.328004  313474 cri.go:89] found id: ""
	I1202 21:18:49.328018  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.328025  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:49.328030  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:49.328088  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:49.352661  313474 cri.go:89] found id: ""
	I1202 21:18:49.352675  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.352682  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:49.352687  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:49.352746  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:49.377363  313474 cri.go:89] found id: ""
	I1202 21:18:49.377376  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.377383  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:49.377389  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:49.377447  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:49.401369  313474 cri.go:89] found id: ""
	I1202 21:18:49.401383  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.401390  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:49.401396  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:49.401461  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:49.425207  313474 cri.go:89] found id: ""
	I1202 21:18:49.425221  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.425228  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:49.425233  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:49.425295  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:49.451589  313474 cri.go:89] found id: ""
	I1202 21:18:49.451604  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.451611  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:49.451619  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:49.451630  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:49.513462  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:49.505690   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:49.506363   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:49.507990   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:49.508509   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:49.510072   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:49.513472  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:49.513482  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:49.575782  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:49.575801  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:49.610890  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:49.610906  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:49.667106  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:49.667123  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
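	The dmesg step narrows the kernel ring buffer to warning-and-above severities; as used in the log, -P disables the pager and -L=never strips color so the output is safe to capture non-interactively (these appear to correspond to util-linux dmesg's --nopager and --color=never):

		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400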
	I1202 21:18:52.184506  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:52.194827  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:52.194887  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:52.221289  313474 cri.go:89] found id: ""
	I1202 21:18:52.221303  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.221310  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:52.221315  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:52.221385  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:52.247152  313474 cri.go:89] found id: ""
	I1202 21:18:52.247167  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.247174  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:52.247179  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:52.247240  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:52.270523  313474 cri.go:89] found id: ""
	I1202 21:18:52.270539  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.270545  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:52.270550  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:52.270610  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:52.294232  313474 cri.go:89] found id: ""
	I1202 21:18:52.294246  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.294253  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:52.294259  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:52.294321  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:52.322550  313474 cri.go:89] found id: ""
	I1202 21:18:52.322563  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.322570  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:52.322576  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:52.322635  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:52.350081  313474 cri.go:89] found id: ""
	I1202 21:18:52.350095  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.350103  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:52.350110  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:52.350171  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:52.373782  313474 cri.go:89] found id: ""
	I1202 21:18:52.373796  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.373817  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:52.373826  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:52.373836  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:52.429396  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:52.429415  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:52.445303  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:52.445319  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:52.509061  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:52.500762   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:52.501579   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:52.503214   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:52.503522   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:52.505017   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:52.509073  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:52.509087  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:52.572171  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:52.572191  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
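	The container-status step is deliberately defensive: the backquoted which lookup keeps the command meaningful even if crictl is not on root's PATH (it falls back to the bare name), and the outer || falls back to docker ps on Docker-runtime clusters; on this containerd node it is the crictl branch that runs:

		sudo `which crictl || echo crictl` ps -a || sudo docker ps -a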
	I1202 21:18:55.105321  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:55.115684  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:55.115746  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:55.143285  313474 cri.go:89] found id: ""
	I1202 21:18:55.143301  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.143313  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:55.143319  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:55.143379  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:55.168631  313474 cri.go:89] found id: ""
	I1202 21:18:55.168645  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.168652  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:55.168658  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:55.168718  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:55.194277  313474 cri.go:89] found id: ""
	I1202 21:18:55.194290  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.194297  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:55.194303  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:55.194361  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:55.221594  313474 cri.go:89] found id: ""
	I1202 21:18:55.221607  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.221614  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:55.221620  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:55.221738  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:55.245639  313474 cri.go:89] found id: ""
	I1202 21:18:55.245684  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.245691  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:55.245697  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:55.245758  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:55.270064  313474 cri.go:89] found id: ""
	I1202 21:18:55.270078  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.270085  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:55.270091  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:55.270151  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:55.298494  313474 cri.go:89] found id: ""
	I1202 21:18:55.298508  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.298515  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:55.298524  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:55.298534  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:55.354337  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:55.354358  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:55.371291  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:55.371306  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:55.441025  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:55.432197   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:55.433031   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:55.434888   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:55.435565   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:55.437238   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:55.441036  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:55.441048  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:55.508470  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:55.508491  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
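	A check not present in the log, but useful when reading it: probing the port the dial errors point at. The path below assumes the apiserver's standard /healthz endpoint; -k skips certificate verification since only reachability matters here:

		curl -k https://localhost:8441/healthz

	Connection refused from this probe too would confirm the diagnosis above: the apiserver process never came up, so the port was never bound.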
	I1202 21:18:58.040648  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:58.052163  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:58.052231  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:58.082641  313474 cri.go:89] found id: ""
	I1202 21:18:58.082655  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.082663  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:58.082668  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:58.082727  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:58.109547  313474 cri.go:89] found id: ""
	I1202 21:18:58.109561  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.109579  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:58.109585  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:58.109687  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:58.134886  313474 cri.go:89] found id: ""
	I1202 21:18:58.134900  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.134908  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:58.134913  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:58.134973  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:58.158535  313474 cri.go:89] found id: ""
	I1202 21:18:58.158549  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.158555  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:58.158561  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:58.158626  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:58.181483  313474 cri.go:89] found id: ""
	I1202 21:18:58.181498  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.181505  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:58.181510  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:58.181567  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:58.207661  313474 cri.go:89] found id: ""
	I1202 21:18:58.207675  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.207682  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:58.207687  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:58.207744  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:58.231079  313474 cri.go:89] found id: ""
	I1202 21:18:58.231092  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.231099  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:58.231107  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:58.231117  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:58.286068  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:58.286086  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:58.301966  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:58.301983  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:58.371817  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:58.363950   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:58.364690   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:58.366066   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:58.366688   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:58.368325   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:58.371827  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:58.371838  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:58.434916  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:58.434935  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:00.970468  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:00.981089  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:00.981161  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:01.007840  313474 cri.go:89] found id: ""
	I1202 21:19:01.007855  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.007863  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:01.007868  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:01.007927  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:01.032203  313474 cri.go:89] found id: ""
	I1202 21:19:01.032217  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.032224  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:01.032229  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:01.032300  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:01.065098  313474 cri.go:89] found id: ""
	I1202 21:19:01.065111  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.065119  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:01.065124  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:01.065186  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:01.091481  313474 cri.go:89] found id: ""
	I1202 21:19:01.091495  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.091502  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:01.091508  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:01.091584  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:01.119523  313474 cri.go:89] found id: ""
	I1202 21:19:01.119538  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.119546  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:01.119552  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:01.119617  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:01.145559  313474 cri.go:89] found id: ""
	I1202 21:19:01.145574  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.145584  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:01.145590  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:01.145699  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:01.171870  313474 cri.go:89] found id: ""
	I1202 21:19:01.171885  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.171892  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:01.171900  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:01.171929  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:01.236730  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:01.228637   16526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:01.229293   16526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:01.230833   16526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:01.231277   16526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:01.232768   16526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:01.236741  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:01.236752  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:01.298712  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:01.298731  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:01.327192  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:01.327213  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:01.382852  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:01.382869  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
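	Analysis note: the cycle above is the apiserver wait loop. Roughly every three seconds it looks for a kube-apiserver process (pgrep) and container (crictl), and when neither exists it gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying. The readiness check this loop is effectively waiting on can be reproduced with a standalone probe; the sketch below is hypothetical (not minikube's own code) and assumes only that a healthy apiserver for this profile would answer TLS requests on the test's --apiserver-port, 8441.

	    // probe_apiserver.go: minimal sketch of the readiness poll the retries above imply.
	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        // InsecureSkipVerify is acceptable only because this is a reachability probe sketch.
	        client := &http.Client{
	            Timeout:   2 * time.Second,
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        deadline := time.Now().Add(2 * time.Minute)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get("https://localhost:8441/healthz")
	            if err == nil {
	                resp.Body.Close()
	                fmt.Println("apiserver responded:", resp.Status)
	                return
	            }
	            // With nothing listening, this prints "connection refused", matching the kubectl stderr above.
	            fmt.Println("not ready:", err)
	            time.Sleep(3 * time.Second) // the log shows retries roughly every 3s
	        }
	        fmt.Println("timed out waiting for apiserver on :8441")
	    }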
	I1202 21:19:03.899143  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:03.908997  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:03.909059  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:03.932688  313474 cri.go:89] found id: ""
	I1202 21:19:03.932701  313474 logs.go:282] 0 containers: []
	W1202 21:19:03.932708  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:03.932714  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:03.932773  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:03.957073  313474 cri.go:89] found id: ""
	I1202 21:19:03.957087  313474 logs.go:282] 0 containers: []
	W1202 21:19:03.957095  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:03.957100  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:03.957161  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:03.981206  313474 cri.go:89] found id: ""
	I1202 21:19:03.981219  313474 logs.go:282] 0 containers: []
	W1202 21:19:03.981233  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:03.981239  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:03.981301  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:04.008306  313474 cri.go:89] found id: ""
	I1202 21:19:04.008322  313474 logs.go:282] 0 containers: []
	W1202 21:19:04.008329  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:04.008335  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:04.008401  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:04.033825  313474 cri.go:89] found id: ""
	I1202 21:19:04.033839  313474 logs.go:282] 0 containers: []
	W1202 21:19:04.033847  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:04.033853  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:04.033912  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:04.062862  313474 cri.go:89] found id: ""
	I1202 21:19:04.062876  313474 logs.go:282] 0 containers: []
	W1202 21:19:04.062883  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:04.062890  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:04.062957  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:04.098358  313474 cri.go:89] found id: ""
	I1202 21:19:04.098372  313474 logs.go:282] 0 containers: []
	W1202 21:19:04.098379  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:04.098388  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:04.098398  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:04.160856  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:04.160874  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:04.176607  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:04.176625  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:04.239202  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:04.231372   16639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:04.231808   16639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:04.233616   16639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:04.233967   16639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:04.235435   16639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:04.239213  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:04.239224  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:04.304570  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:04.304588  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
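	Analysis note: each pass issues one crictl query per control-plane component; the paired "found id: \"\"" / "0 containers" lines above mean containerd has never created any of them, not that they started and crashed. A condensed, hypothetical equivalent of those seven queries (assuming crictl is available inside the node, as the log's own commands do):

	    // list_components.go: hypothetical condensation of the per-component checks above.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        components := []string{
	            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet",
	        }
	        for _, name := range components {
	            // Same query the log shows: all states, IDs only, filtered by name.
	            out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	            if ids := strings.Fields(string(out)); len(ids) > 0 {
	                fmt.Printf("%s: %d container(s)\n", name, len(ids))
	            } else {
	                fmt.Printf("no container found matching %q\n", name)
	            }
	        }
	    }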
	I1202 21:19:06.834974  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:06.846425  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:06.846496  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:06.874499  313474 cri.go:89] found id: ""
	I1202 21:19:06.874513  313474 logs.go:282] 0 containers: []
	W1202 21:19:06.874520  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:06.874526  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:06.874585  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:06.899405  313474 cri.go:89] found id: ""
	I1202 21:19:06.899419  313474 logs.go:282] 0 containers: []
	W1202 21:19:06.899426  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:06.899432  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:06.899490  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:06.927927  313474 cri.go:89] found id: ""
	I1202 21:19:06.927940  313474 logs.go:282] 0 containers: []
	W1202 21:19:06.927947  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:06.927953  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:06.928017  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:06.956416  313474 cri.go:89] found id: ""
	I1202 21:19:06.956430  313474 logs.go:282] 0 containers: []
	W1202 21:19:06.956437  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:06.956443  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:06.956503  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:06.982016  313474 cri.go:89] found id: ""
	I1202 21:19:06.982030  313474 logs.go:282] 0 containers: []
	W1202 21:19:06.982038  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:06.982043  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:06.982102  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:07.008744  313474 cri.go:89] found id: ""
	I1202 21:19:07.008758  313474 logs.go:282] 0 containers: []
	W1202 21:19:07.008765  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:07.008771  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:07.008831  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:07.051903  313474 cri.go:89] found id: ""
	I1202 21:19:07.051917  313474 logs.go:282] 0 containers: []
	W1202 21:19:07.051924  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:07.051933  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:07.051956  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:07.111866  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:07.111885  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:07.131193  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:07.131212  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:07.197137  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:07.189103   16738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:07.189535   16738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:07.191127   16738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:07.191787   16738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:07.193245   16738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:07.197148  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:07.197159  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:07.258783  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:07.258802  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:09.784238  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:09.795790  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:09.795850  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:09.821880  313474 cri.go:89] found id: ""
	I1202 21:19:09.821894  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.821902  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:09.821907  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:09.821970  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:09.845564  313474 cri.go:89] found id: ""
	I1202 21:19:09.845579  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.845586  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:09.845617  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:09.845698  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:09.874848  313474 cri.go:89] found id: ""
	I1202 21:19:09.874862  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.874875  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:09.874880  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:09.874939  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:09.899396  313474 cri.go:89] found id: ""
	I1202 21:19:09.899410  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.899417  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:09.899423  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:09.899485  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:09.928207  313474 cri.go:89] found id: ""
	I1202 21:19:09.928231  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.928291  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:09.928297  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:09.928367  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:09.953363  313474 cri.go:89] found id: ""
	I1202 21:19:09.953386  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.953393  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:09.953400  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:09.953478  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:09.977852  313474 cri.go:89] found id: ""
	I1202 21:19:09.977866  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.977873  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:09.977881  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:09.977891  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:10.035535  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:10.035554  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:10.053223  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:10.053240  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:10.129538  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:10.121217   16842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:10.122156   16842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:10.123949   16842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:10.124266   16842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:10.125909   16842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:10.129549  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:10.129561  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:10.196069  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:10.196089  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:12.729098  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:12.739162  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:12.739221  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:12.762279  313474 cri.go:89] found id: ""
	I1202 21:19:12.762293  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.762300  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:12.762305  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:12.762405  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:12.787279  313474 cri.go:89] found id: ""
	I1202 21:19:12.787293  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.787300  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:12.787306  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:12.787364  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:12.812545  313474 cri.go:89] found id: ""
	I1202 21:19:12.812558  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.812566  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:12.812571  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:12.812642  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:12.840741  313474 cri.go:89] found id: ""
	I1202 21:19:12.840755  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.840762  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:12.840767  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:12.840824  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:12.868898  313474 cri.go:89] found id: ""
	I1202 21:19:12.868912  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.868919  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:12.868924  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:12.868983  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:12.895296  313474 cri.go:89] found id: ""
	I1202 21:19:12.895310  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.895317  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:12.895322  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:12.895382  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:12.918838  313474 cri.go:89] found id: ""
	I1202 21:19:12.918852  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.918859  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:12.918867  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:12.918880  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:12.989410  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:12.989434  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:13.018849  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:13.018864  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:13.075957  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:13.075976  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:13.095483  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:13.095501  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:13.160629  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:13.153520   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:13.154016   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:13.155471   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:13.155775   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:13.157071   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
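	Analysis note: every describe-nodes attempt fails identically — kubectl cannot even open a TCP connection to localhost:8441, so this is not a TLS, auth, or API discovery problem; nothing is listening on the port at all. A hypothetical confirmation of that, independent of kubectl and the kubeconfig:

	    // dial_check.go: hypothetical sketch confirming the apiserver port is closed outright.
	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	        if err != nil {
	            fmt.Println("dial failed:", err) // expect "connect: connection refused", as in the stderr above
	            return
	        }
	        conn.Close()
	        fmt.Println("something is listening on :8441")
	    }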
	I1202 21:19:15.660888  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:15.670559  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:15.670624  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:15.693948  313474 cri.go:89] found id: ""
	I1202 21:19:15.693961  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.693969  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:15.693974  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:15.694041  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:15.720374  313474 cri.go:89] found id: ""
	I1202 21:19:15.720389  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.720396  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:15.720401  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:15.720460  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:15.745246  313474 cri.go:89] found id: ""
	I1202 21:19:15.745259  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.745267  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:15.745272  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:15.745339  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:15.772221  313474 cri.go:89] found id: ""
	I1202 21:19:15.772234  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.772241  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:15.772247  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:15.772317  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:15.795604  313474 cri.go:89] found id: ""
	I1202 21:19:15.795618  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.795624  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:15.795630  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:15.795687  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:15.824167  313474 cri.go:89] found id: ""
	I1202 21:19:15.824180  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.824187  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:15.824193  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:15.824252  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:15.847367  313474 cri.go:89] found id: ""
	I1202 21:19:15.847380  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.847387  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:15.847396  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:15.847406  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:15.901801  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:15.901820  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:15.917208  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:15.917228  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:15.976565  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:15.969576   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:15.970085   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:15.971162   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:15.971576   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:15.973029   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:15.976575  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:15.976586  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:16.041174  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:16.041192  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:18.580269  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:18.590169  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:18.590245  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:18.615027  313474 cri.go:89] found id: ""
	I1202 21:19:18.615042  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.615049  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:18.615055  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:18.615135  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:18.640491  313474 cri.go:89] found id: ""
	I1202 21:19:18.640505  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.640512  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:18.640517  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:18.640584  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:18.665078  313474 cri.go:89] found id: ""
	I1202 21:19:18.665092  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.665099  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:18.665105  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:18.665162  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:18.689844  313474 cri.go:89] found id: ""
	I1202 21:19:18.689858  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.689865  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:18.689871  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:18.689928  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:18.715165  313474 cri.go:89] found id: ""
	I1202 21:19:18.715179  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.715186  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:18.715191  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:18.715250  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:18.740098  313474 cri.go:89] found id: ""
	I1202 21:19:18.740111  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.740118  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:18.740124  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:18.740181  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:18.764406  313474 cri.go:89] found id: ""
	I1202 21:19:18.764420  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.764427  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:18.764435  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:18.764448  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:18.795780  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:18.795801  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:18.851180  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:18.851199  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:18.867072  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:18.867088  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:18.932904  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:18.925224   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:18.926353   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:18.927456   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:18.928040   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:18.929537   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:18.932917  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:18.932930  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:21.499766  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:21.511750  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:21.511824  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:21.541598  313474 cri.go:89] found id: ""
	I1202 21:19:21.541612  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.541619  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:21.541624  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:21.541710  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:21.565690  313474 cri.go:89] found id: ""
	I1202 21:19:21.565705  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.565712  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:21.565717  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:21.565786  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:21.588975  313474 cri.go:89] found id: ""
	I1202 21:19:21.588989  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.588996  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:21.589002  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:21.589060  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:21.616075  313474 cri.go:89] found id: ""
	I1202 21:19:21.616100  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.616108  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:21.616114  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:21.616189  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:21.640380  313474 cri.go:89] found id: ""
	I1202 21:19:21.640393  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.640410  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:21.640416  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:21.640473  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:21.664881  313474 cri.go:89] found id: ""
	I1202 21:19:21.664895  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.664912  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:21.664919  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:21.664976  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:21.688940  313474 cri.go:89] found id: ""
	I1202 21:19:21.688961  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.688968  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:21.688976  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:21.688986  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:21.747031  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:21.747050  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:21.762969  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:21.762988  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:21.829106  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:21.821852   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:21.822216   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:21.823843   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:21.824190   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:21.825723   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:21.829117  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:21.829142  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:21.890717  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:21.890735  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:24.418721  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:24.428805  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:24.428867  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:24.454807  313474 cri.go:89] found id: ""
	I1202 21:19:24.454820  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.454827  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:24.454844  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:24.454905  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:24.479376  313474 cri.go:89] found id: ""
	I1202 21:19:24.479390  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.479396  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:24.479402  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:24.479459  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:24.504161  313474 cri.go:89] found id: ""
	I1202 21:19:24.504174  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.504181  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:24.504195  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:24.504257  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:24.529438  313474 cri.go:89] found id: ""
	I1202 21:19:24.529452  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.529460  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:24.529466  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:24.529540  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:24.554237  313474 cri.go:89] found id: ""
	I1202 21:19:24.554251  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.554258  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:24.554264  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:24.554322  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:24.583978  313474 cri.go:89] found id: ""
	I1202 21:19:24.583992  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.583999  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:24.584005  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:24.584071  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:24.608672  313474 cri.go:89] found id: ""
	I1202 21:19:24.608686  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.608694  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:24.608702  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:24.608711  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:24.663382  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:24.663399  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:24.678935  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:24.678953  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:24.741560  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:24.733345   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.733924   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.735511   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.736192   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.737811   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:24.741571  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:24.741584  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:24.805991  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:24.806014  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
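	[editor's note] The sweep above repeats every few seconds while the apiserver is down: each control-plane component is probed with crictl, and the container-status step falls back to docker. A rough standalone equivalent, with the component list and fallback taken from the log lines themselves (not from minikube source), is:

		# hedged sketch: reproduce the per-component probe seen in the log
		for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
		  ids=$(sudo crictl ps -a --quiet --name="$c")
		  [ -z "$ids" ] && echo "no container matching \"$c\""
		done
		# container status, with the same crictl-or-docker fallback minikube runs
		sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a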
	I1202 21:19:27.332486  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:27.343923  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:27.343980  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:27.370846  313474 cri.go:89] found id: ""
	I1202 21:19:27.370862  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.370869  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:27.370874  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:27.370933  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:27.394765  313474 cri.go:89] found id: ""
	I1202 21:19:27.394779  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.394786  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:27.394791  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:27.394858  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:27.418228  313474 cri.go:89] found id: ""
	I1202 21:19:27.418241  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.418248  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:27.418254  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:27.418312  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:27.442428  313474 cri.go:89] found id: ""
	I1202 21:19:27.442441  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.442448  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:27.442454  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:27.442516  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:27.467409  313474 cri.go:89] found id: ""
	I1202 21:19:27.467423  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.467430  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:27.467435  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:27.467492  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:27.490186  313474 cri.go:89] found id: ""
	I1202 21:19:27.490200  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.490207  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:27.490213  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:27.490270  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:27.515032  313474 cri.go:89] found id: ""
	I1202 21:19:27.515046  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.515054  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:27.515062  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:27.515072  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:27.570118  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:27.570137  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:27.585958  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:27.585974  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:27.649259  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:27.641242   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.641812   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.643494   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.644027   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.645611   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:27.649269  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:27.649288  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:27.711120  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:27.711140  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:30.243770  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:30.255318  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:30.255385  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:30.279952  313474 cri.go:89] found id: ""
	I1202 21:19:30.279966  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.279974  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:30.279979  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:30.280039  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:30.320036  313474 cri.go:89] found id: ""
	I1202 21:19:30.320049  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.320056  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:30.320061  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:30.320119  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:30.351365  313474 cri.go:89] found id: ""
	I1202 21:19:30.351378  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.351385  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:30.351391  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:30.351449  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:30.378208  313474 cri.go:89] found id: ""
	I1202 21:19:30.378221  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.378228  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:30.378234  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:30.378293  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:30.404248  313474 cri.go:89] found id: ""
	I1202 21:19:30.404262  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.404268  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:30.404274  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:30.404331  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:30.428678  313474 cri.go:89] found id: ""
	I1202 21:19:30.428691  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.428698  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:30.428714  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:30.428786  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:30.452008  313474 cri.go:89] found id: ""
	I1202 21:19:30.452021  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.452039  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:30.452047  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:30.452057  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:30.506509  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:30.506530  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:30.522444  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:30.522464  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:30.585091  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:30.576660   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.577294   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.579170   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.579871   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.581501   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:30.585102  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:30.585112  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:30.649461  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:30.649484  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:33.184340  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:33.195406  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:33.195468  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:33.220999  313474 cri.go:89] found id: ""
	I1202 21:19:33.221013  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.221020  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:33.221026  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:33.221087  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:33.245046  313474 cri.go:89] found id: ""
	I1202 21:19:33.245060  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.245068  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:33.245073  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:33.245134  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:33.268397  313474 cri.go:89] found id: ""
	I1202 21:19:33.268410  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.268417  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:33.268423  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:33.268485  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:33.304556  313474 cri.go:89] found id: ""
	I1202 21:19:33.304569  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.304577  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:33.304582  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:33.304643  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:33.335992  313474 cri.go:89] found id: ""
	I1202 21:19:33.336006  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.336013  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:33.336019  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:33.336086  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:33.367967  313474 cri.go:89] found id: ""
	I1202 21:19:33.367980  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.367989  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:33.367995  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:33.368052  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:33.393839  313474 cri.go:89] found id: ""
	I1202 21:19:33.393853  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.393860  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:33.393867  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:33.393877  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:33.448875  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:33.448894  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:33.464807  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:33.464822  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:33.531228  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:33.523917   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.524445   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.525987   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.526306   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.527749   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:33.531238  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:33.531248  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:33.592933  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:33.592951  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:36.121943  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:36.132447  313474 kubeadm.go:602] duration metric: took 4m4.151661323s to restartPrimaryControlPlane
	W1202 21:19:36.132510  313474 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 21:19:36.132588  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1202 21:19:36.539188  313474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:19:36.552660  313474 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 21:19:36.560203  313474 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 21:19:36.560257  313474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:19:36.567605  313474 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 21:19:36.567615  313474 kubeadm.go:158] found existing configuration files:
	
	I1202 21:19:36.567669  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:19:36.575238  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 21:19:36.575292  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 21:19:36.582200  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:19:36.589483  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 21:19:36.589539  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:19:36.596652  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:19:36.604117  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 21:19:36.604180  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:19:36.611312  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:19:36.619074  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 21:19:36.619140  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
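	[editor's note] The four grep-and-remove steps above all follow one pattern: if the expected control-plane endpoint is absent from a kubeconfig, the file is deleted so kubeadm can regenerate it. A condensed sketch, with the endpoint and file names exactly as they appear in the log:

		# hedged sketch of the stale-kubeconfig cleanup shown above
		for f in admin kubelet controller-manager scheduler; do
		  sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f.conf" \
		    || sudo rm -f "/etc/kubernetes/$f.conf"
		done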
	I1202 21:19:36.626580  313474 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 21:19:36.665764  313474 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 21:19:36.665850  313474 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 21:19:36.739165  313474 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 21:19:36.739244  313474 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 21:19:36.739289  313474 kubeadm.go:319] OS: Linux
	I1202 21:19:36.739345  313474 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 21:19:36.739401  313474 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 21:19:36.739460  313474 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 21:19:36.739515  313474 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 21:19:36.739574  313474 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 21:19:36.739631  313474 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 21:19:36.739681  313474 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 21:19:36.739743  313474 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 21:19:36.739800  313474 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 21:19:36.802641  313474 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 21:19:36.802776  313474 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 21:19:36.802889  313474 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 21:19:36.810139  313474 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 21:19:36.815519  313474 out.go:252]   - Generating certificates and keys ...
	I1202 21:19:36.815612  313474 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 21:19:36.815684  313474 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 21:19:36.815766  313474 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 21:19:36.815832  313474 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 21:19:36.815906  313474 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 21:19:36.815965  313474 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 21:19:36.816035  313474 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 21:19:36.816096  313474 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 21:19:36.816180  313474 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 21:19:36.816258  313474 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 21:19:36.816301  313474 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 21:19:36.816363  313474 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 21:19:36.979466  313474 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 21:19:37.030688  313474 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 21:19:37.178864  313474 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 21:19:37.287458  313474 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 21:19:37.759486  313474 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 21:19:37.759977  313474 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 21:19:37.764136  313474 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 21:19:37.767507  313474 out.go:252]   - Booting up control plane ...
	I1202 21:19:37.767615  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 21:19:37.767697  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 21:19:37.768187  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 21:19:37.789119  313474 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 21:19:37.789389  313474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 21:19:37.796801  313474 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 21:19:37.797075  313474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 21:19:37.797116  313474 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 21:19:37.935526  313474 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 21:19:37.935655  313474 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 21:23:37.935181  313474 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000055683s
	I1202 21:23:37.935206  313474 kubeadm.go:319] 
	I1202 21:23:37.935262  313474 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 21:23:37.935294  313474 kubeadm.go:319] 	- The kubelet is not running
	I1202 21:23:37.935397  313474 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 21:23:37.935402  313474 kubeadm.go:319] 
	I1202 21:23:37.935505  313474 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 21:23:37.935535  313474 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 21:23:37.935565  313474 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 21:23:37.935567  313474 kubeadm.go:319] 
	I1202 21:23:37.939509  313474 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 21:23:37.940015  313474 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 21:23:37.940174  313474 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 21:23:37.940488  313474 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1202 21:23:37.940494  313474 kubeadm.go:319] 
	I1202 21:23:37.940592  313474 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1202 21:23:37.940735  313474 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000055683s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
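	[editor's note] The failure point is kubeadm's kubelet health check, not the control-plane manifests, which were all written successfully. The probe and the triage steps named in the error text can be run by hand on the node (commands taken directly from the output above):

		# kubeadm polls this endpoint for up to 4m0s before giving up
		curl -sSL http://127.0.0.1:10248/healthz
		# triage steps suggested by the error text
		systemctl status kubelet
		journalctl -xeu kubelet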
	
	I1202 21:23:37.940819  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1202 21:23:38.352160  313474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:23:38.364903  313474 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 21:23:38.364957  313474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:23:38.373626  313474 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 21:23:38.373635  313474 kubeadm.go:158] found existing configuration files:
	
	I1202 21:23:38.373703  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:23:38.380912  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 21:23:38.380966  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 21:23:38.387986  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:23:38.395511  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 21:23:38.395567  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:23:38.403067  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:23:38.410435  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 21:23:38.410491  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:23:38.417648  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:23:38.425411  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 21:23:38.425466  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 21:23:38.432690  313474 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 21:23:38.469901  313474 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 21:23:38.470170  313474 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 21:23:38.543545  313474 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 21:23:38.543611  313474 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 21:23:38.543646  313474 kubeadm.go:319] OS: Linux
	I1202 21:23:38.543689  313474 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 21:23:38.543736  313474 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 21:23:38.543782  313474 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 21:23:38.543829  313474 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 21:23:38.543876  313474 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 21:23:38.543922  313474 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 21:23:38.543966  313474 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 21:23:38.544013  313474 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 21:23:38.544058  313474 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 21:23:38.612266  313474 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 21:23:38.612377  313474 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 21:23:38.612479  313474 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 21:23:38.617939  313474 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 21:23:38.623176  313474 out.go:252]   - Generating certificates and keys ...
	I1202 21:23:38.623272  313474 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 21:23:38.623347  313474 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 21:23:38.623429  313474 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 21:23:38.623494  313474 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 21:23:38.623569  313474 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 21:23:38.623628  313474 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 21:23:38.623699  313474 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 21:23:38.623765  313474 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 21:23:38.623849  313474 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 21:23:38.623933  313474 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 21:23:38.623979  313474 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 21:23:38.624034  313474 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 21:23:39.195644  313474 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 21:23:40.418759  313474 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 21:23:40.662567  313474 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 21:23:41.331428  313474 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 21:23:41.582387  313474 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 21:23:41.582932  313474 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 21:23:41.585414  313474 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 21:23:41.588388  313474 out.go:252]   - Booting up control plane ...
	I1202 21:23:41.588487  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 21:23:41.588564  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 21:23:41.588629  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 21:23:41.609723  313474 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 21:23:41.609836  313474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 21:23:41.617428  313474 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 21:23:41.617997  313474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 21:23:41.618040  313474 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 21:23:41.754122  313474 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 21:23:41.754238  313474 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 21:27:41.753164  313474 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001114938s
	I1202 21:27:41.753189  313474 kubeadm.go:319] 
	I1202 21:27:41.753242  313474 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 21:27:41.753272  313474 kubeadm.go:319] 	- The kubelet is not running
	I1202 21:27:41.753369  313474 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 21:27:41.753373  313474 kubeadm.go:319] 
	I1202 21:27:41.753470  313474 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 21:27:41.753499  313474 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 21:27:41.753527  313474 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 21:27:41.753530  313474 kubeadm.go:319] 
	I1202 21:27:41.757163  313474 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 21:27:41.757586  313474 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 21:27:41.757709  313474 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 21:27:41.757943  313474 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 21:27:41.757948  313474 kubeadm.go:319] 
	I1202 21:27:41.758016  313474 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
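	[editor's note] Note the second SystemVerification warning: this host is on cgroups v1, and kubelet v1.35 requires an explicit opt-in to keep running there. Purely as an illustration of the configuration field named in the warning (minikube rewrites /var/lib/kubelet/config.yaml on every init, so this is not a fix recipe for the test):

		# hypothetical opt-in fragment; field name taken from the warning text above
		cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
		failCgroupV1: false
		EOF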
	I1202 21:27:41.758065  313474 kubeadm.go:403] duration metric: took 12m9.810714629s to StartCluster
	I1202 21:27:41.758097  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:27:41.758157  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:27:41.783479  313474 cri.go:89] found id: ""
	I1202 21:27:41.783492  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.783500  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:27:41.783505  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:27:41.783577  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:27:41.814610  313474 cri.go:89] found id: ""
	I1202 21:27:41.814624  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.814631  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:27:41.814644  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:27:41.814702  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:27:41.844545  313474 cri.go:89] found id: ""
	I1202 21:27:41.844559  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.844566  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:27:41.844571  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:27:41.844630  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:27:41.876235  313474 cri.go:89] found id: ""
	I1202 21:27:41.876250  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.876257  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:27:41.876262  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:27:41.876320  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:27:41.899944  313474 cri.go:89] found id: ""
	I1202 21:27:41.899957  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.899964  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:27:41.899969  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:27:41.900027  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:27:41.924640  313474 cri.go:89] found id: ""
	I1202 21:27:41.924653  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.924660  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:27:41.924666  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:27:41.924723  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:27:41.951344  313474 cri.go:89] found id: ""
	I1202 21:27:41.951358  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.951365  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:27:41.951373  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:27:41.951383  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:27:42.009004  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:27:42.009028  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:27:42.033968  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:27:42.033989  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:27:42.114849  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:27:42.103925   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.104852   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.106932   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.108645   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.109525   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:27:42.114863  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:27:42.114875  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:27:42.193571  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:27:42.193593  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 21:27:42.259231  313474 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001114938s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 21:27:42.259270  313474 out.go:285] * 
	W1202 21:27:42.259601  313474 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001114938s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 21:27:42.259616  313474 out.go:285] * 
	W1202 21:27:42.262291  313474 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:27:42.269405  313474 out.go:203] 
	W1202 21:27:42.272139  313474 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001114938s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 21:27:42.272287  313474 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 21:27:42.272371  313474 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 21:27:42.276351  313474 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609401010Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609411414Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609426076Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609435913Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609447105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609462194Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609496941Z" level=info msg="runtime interface created"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609503784Z" level=info msg="created NRI interface"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609513794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609548107Z" level=info msg="Connect containerd service"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609923390Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.610459300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.628985739Z" level=info msg="Start subscribing containerd event"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.629232566Z" level=info msg="Start recovering state"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.630271509Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.630432538Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655348240Z" level=info msg="Start event monitor"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655522692Z" level=info msg="Start cni network conf syncer for default"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655586969Z" level=info msg="Start streaming server"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655657638Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655717968Z" level=info msg="runtime interface starting up..."
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655774631Z" level=info msg="starting plugins..."
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655837464Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 21:15:30 functional-753958 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.657496581Z" level=info msg="containerd successfully booted in 0.074787s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:27:43.494426   21592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:43.494938   21592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:43.497358   21592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:43.498669   21592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:43.499479   21592 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 21:27:43 up  3:10,  0 user,  load average: 0.16, 0.21, 0.52
	Linux functional-753958 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:27:40 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:27:41 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 02 21:27:41 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:27:41 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:27:41 functional-753958 kubelet[21395]: E1202 21:27:41.086272   21395 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:27:41 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:27:41 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:27:41 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 02 21:27:41 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:27:41 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:27:41 functional-753958 kubelet[21410]: E1202 21:27:41.867439   21410 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:27:41 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:27:41 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:27:42 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 02 21:27:42 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:27:42 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:27:42 functional-753958 kubelet[21492]: E1202 21:27:42.601792   21492 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:27:42 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:27:42 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:27:43 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 02 21:27:43 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:27:43 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:27:43 functional-753958 kubelet[21555]: E1202 21:27:43.349766   21555 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:27:43 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:27:43 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958: exit status 2 (369.007924ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753958" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (736.62s)
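Every kubelet restart in the journal above fails the same validation: the node is on cgroup v1, and kubelet v1.35 refuses to start on cgroup v1 hosts unless that check is explicitly relaxed (see the KEP link in the kubeadm SystemVerification warning). A minimal diagnostic sketch, assuming shell access to the functional-753958 node; the last two commands echo the suggestions already printed in the log, and the cgroup check is an added illustration, not part of this run:

	# "cgroup2fs" means the node is on cgroup v2; "tmpfs" means cgroup v1,
	# which is what the kubelet validation above is rejecting.
	minikube -p functional-753958 ssh -- stat -fc %T /sys/fs/cgroup

	# Troubleshooting commands suggested by kubeadm in the output above:
	minikube -p functional-753958 ssh -- sudo systemctl status kubelet
	minikube -p functional-753958 ssh -- sudo journalctl -xeu kubelet | tail -n 50

Per the SystemVerification warning, re-enabling cgroup v1 for kubelet v1.35+ requires setting the kubelet configuration option 'FailCgroupV1' to 'false' and explicitly skipping the validation; whether minikube's generic --extra-config mechanism can pass that field through on this branch is an open question, not something this run demonstrates.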

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-753958 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-753958 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (62.789137ms)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-753958 get po -l tier=control-plane -n kube-system -o=json": exit status 1
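The empty item list plus the connection-refused stderr means the label selector never reached a live apiserver: nothing answers on 192.168.49.2:8441 because kube-apiserver never came up (same root cause as the ExtraConfig failure above). A quick reachability sketch, assuming the port mappings shown in the docker inspect output below; the -k flag and the /healthz path are illustrative probes, not part of the test:

	# Probe the apiserver endpoint the test is hitting:
	curl -k https://192.168.49.2:8441/healthz

	# Or via the published host port (8441/tcp -> 127.0.0.1:33111 per docker inspect):
	curl -k https://127.0.0.1:33111/healthz

	# kubectl-side equivalent using the same context:
	kubectl --context functional-753958 get --raw /healthz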
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753958
helpers_test.go:243: (dbg) docker inspect functional-753958:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	        "Created": "2025-12-02T21:00:39.470229988Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301734,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:00:39.535019201Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hostname",
	        "HostsPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hosts",
	        "LogPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a-json.log",
	        "Name": "/functional-753958",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753958:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-753958",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	                "LowerDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753958",
	                "Source": "/var/lib/docker/volumes/functional-753958/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753958",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753958",
	                "name.minikube.sigs.k8s.io": "functional-753958",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44df82336b1507d3d877e818baebb098332071ab7b3e3f7343e15c1fe55b3ab1",
	            "SandboxKey": "/var/run/docker/netns/44df82336b15",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753958": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:7f:7f:d7:c5:84",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e90d0c1216d32743827f22180e4e07c31360f0f3cc3431312aff46869716bb9",
	                    "EndpointID": "5ead8efafa1df1b03c8f1f51c032157081a17706bc48186adc0670bc42c0b521",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753958",
	                        "321ef4a88b51"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958: exit status 2 (321.044468ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-753958 logs -n 25: (1.148470463s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-446665 image ls --format short --alsologtostderr                                                                                             │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image   │ functional-446665 image ls --format json --alsologtostderr                                                                                              │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image   │ functional-446665 image ls --format table --alsologtostderr                                                                                             │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ ssh     │ functional-446665 ssh pgrep buildkitd                                                                                                                   │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │                     │
	│ image   │ functional-446665 image build -t localhost/my-image:functional-446665 testdata/build --alsologtostderr                                                  │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ image   │ functional-446665 image ls                                                                                                                              │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ delete  │ -p functional-446665                                                                                                                                    │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
	│ start   │ -p functional-753958 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │                     │
	│ start   │ -p functional-753958 --alsologtostderr -v=8                                                                                                             │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:09 UTC │                     │
	│ cache   │ functional-753958 cache add registry.k8s.io/pause:3.1                                                                                                   │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ functional-753958 cache add registry.k8s.io/pause:3.3                                                                                                   │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ functional-753958 cache add registry.k8s.io/pause:latest                                                                                                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ functional-753958 cache add minikube-local-cache-test:functional-753958                                                                                 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ functional-753958 cache delete minikube-local-cache-test:functional-753958                                                                              │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ list                                                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ ssh     │ functional-753958 ssh sudo crictl images                                                                                                                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ ssh     │ functional-753958 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ ssh     │ functional-753958 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │                     │
	│ cache   │ functional-753958 cache reload                                                                                                                          │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ ssh     │ functional-753958 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ kubectl │ functional-753958 kubectl -- --context functional-753958 get pods                                                                                       │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │                     │
	│ start   │ -p functional-753958 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:15:27
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:15:27.807151  313474 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:15:27.807260  313474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:15:27.807264  313474 out.go:374] Setting ErrFile to fd 2...
	I1202 21:15:27.807268  313474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:15:27.807610  313474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:15:27.808015  313474 out.go:368] Setting JSON to false
	I1202 21:15:27.809366  313474 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10666,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 21:15:27.809431  313474 start.go:143] virtualization:  
	I1202 21:15:27.812823  313474 out.go:179] * [functional-753958] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:15:27.815796  313474 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:15:27.816009  313474 notify.go:221] Checking for updates...
	I1202 21:15:27.821378  313474 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:15:27.824158  313474 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:15:27.826979  313474 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 21:15:27.829780  313474 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:15:27.832616  313474 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:15:27.835951  313474 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:15:27.836043  313474 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:15:27.868236  313474 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:15:27.868329  313474 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:15:27.931411  313474 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 21:15:27.921542243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:15:27.931507  313474 docker.go:319] overlay module found
	I1202 21:15:27.934670  313474 out.go:179] * Using the docker driver based on existing profile
	I1202 21:15:27.937620  313474 start.go:309] selected driver: docker
	I1202 21:15:27.937631  313474 start.go:927] validating driver "docker" against &{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:15:27.937764  313474 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:15:27.937862  313474 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:15:27.995269  313474 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 21:15:27.986382161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:15:27.995660  313474 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 21:15:27.995688  313474 cni.go:84] Creating CNI manager for ""
	I1202 21:15:27.995745  313474 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 21:15:27.995788  313474 start.go:353] cluster config:
	{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
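
The cni.go lines above show the CNI choice being made before the node is touched: with the docker driver and a non-Docker runtime (containerd here), minikube recommends kindnet. A simplified Go sketch of that decision rule, assuming only the driver and runtime feed into it (the real cni.New logic also consults the rest of the cluster config):

package main

import "fmt"

// chooseCNI mirrors the rule logged by cni.go above. It is an illustration,
// not minikube's actual implementation.
func chooseCNI(driver, runtime string) string {
	if runtime == "docker" {
		return "" // Docker's own bridge networking suffices; no CNI manager needed
	}
	if driver == "docker" {
		return "kindnet" // containerd/cri-o inside the kic container -> kindnet
	}
	return "bridge" // assumed conservative default for other drivers in this sketch
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd")) // prints: kindnet
}
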
	I1202 21:15:27.998840  313474 out.go:179] * Starting "functional-753958" primary control-plane node in "functional-753958" cluster
	I1202 21:15:28.001915  313474 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 21:15:28.005631  313474 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:15:28.008845  313474 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 21:15:28.008946  313474 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:15:28.029517  313474 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:15:28.029530  313474 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 21:15:28.078709  313474 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1202 21:15:28.277463  313474 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
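
The two 404s above are expected for a freshly cut Kubernetes version: minikube probes its preload mirrors in order (the GCS bucket, then the GitHub release) and, finding no preloaded image tarball for v1.35.0-beta.0, falls back to the per-image cache exercised in the cache.go lines below. A minimal Go sketch of that mirror fallback, with the URLs taken verbatim from the log and an illustrative function name:

package main

import (
	"fmt"
	"net/http"
)

// firstAvailable returns the first URL that answers 200 OK. A 404, as logged
// above, just means "try the next mirror"; exhausting the list means no preload.
func firstAvailable(urls []string) (string, bool) {
	for _, u := range urls {
		resp, err := http.Head(u)
		if err != nil {
			continue // network error: move on to the next mirror
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return u, true
		}
	}
	return "", false
}

func main() {
	urls := []string{
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4",
		"https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4",
	}
	if u, ok := firstAvailable(urls); ok {
		fmt.Println("preload found:", u)
	} else {
		fmt.Println("no preload available; falling back to the per-image cache")
	}
}
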
	I1202 21:15:28.277635  313474 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/config.json ...
	I1202 21:15:28.277718  313474 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.277817  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 21:15:28.277826  313474 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 130.54µs
	I1202 21:15:28.277840  313474 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 21:15:28.277851  313474 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.277891  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 21:15:28.277896  313474 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 46.374µs
	I1202 21:15:28.277901  313474 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 21:15:28.277913  313474 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:15:28.277910  313474 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.277949  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 21:15:28.277954  313474 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 44.659µs
	I1202 21:15:28.277951  313474 start.go:360] acquireMachinesLock for functional-753958: {Name:mk3203202a2efc5b27c2a0a16d932dc1b1f07522 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.277959  313474 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 21:15:28.277969  313474 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.277991  313474 start.go:364] duration metric: took 28.011µs to acquireMachinesLock for "functional-753958"
	I1202 21:15:28.277998  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 21:15:28.278004  313474 start.go:96] Skipping create...Using existing machine configuration
	I1202 21:15:28.278003  313474 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 34.797µs
	I1202 21:15:28.278008  313474 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 21:15:28.278008  313474 fix.go:54] fixHost starting: 
	I1202 21:15:28.278015  313474 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.278051  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 21:15:28.278067  313474 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 40.63µs
	I1202 21:15:28.278075  313474 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 21:15:28.278084  313474 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.278133  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 21:15:28.278144  313474 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 58.148µs
	I1202 21:15:28.278154  313474 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 21:15:28.278163  313474 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.278201  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 21:15:28.278206  313474 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 44.323µs
	I1202 21:15:28.278211  313474 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 21:15:28.278227  313474 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.278272  313474 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:15:28.278274  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 21:15:28.278279  313474 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 53.693µs
	I1202 21:15:28.278284  313474 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 21:15:28.278293  313474 cache.go:87] Successfully saved all images to host disk.
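
Each cache.go block above follows the same pattern, which is why every "save" takes only tens of microseconds: acquire a per-image lock, stat the tarball under .minikube/cache/images, and return early when it already exists. A sketch of that check-then-skip flow, with illustrative names rather than minikube's internal API:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sync"
	"time"
)

var locks sync.Map // one mutex per destination path, like the named locks above

// cacheImage sketches the check-then-skip pattern: lock, existence test, and
// an early return when the tar file is already on disk.
func cacheImage(image, cacheDir string) error {
	dst := filepath.Join(cacheDir, filepath.FromSlash(image))
	mu, _ := locks.LoadOrStore(dst, &sync.Mutex{})
	mu.(*sync.Mutex).Lock()
	defer mu.(*sync.Mutex).Unlock()

	start := time.Now()
	if _, err := os.Stat(dst); err == nil {
		fmt.Printf("cache image %q -> %q took %s (exists, skipping)\n", image, dst, time.Since(start))
		return nil
	}
	// Only reached on a cache miss: pull the image and write the tarball here.
	return fmt.Errorf("cache miss for %q: download not implemented in this sketch", image)
}

func main() {
	fmt.Println(cacheImage("gcr.io/k8s-minikube/storage-provisioner_v5", "/tmp/cache/images/arm64"))
}
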
	I1202 21:15:28.303149  313474 fix.go:112] recreateIfNeeded on functional-753958: state=Running err=<nil>
	W1202 21:15:28.303168  313474 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 21:15:28.306592  313474 out.go:252] * Updating the running docker "functional-753958" container ...
	I1202 21:15:28.306620  313474 machine.go:94] provisionDockerMachine start ...
	I1202 21:15:28.306711  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:28.331641  313474 main.go:143] libmachine: Using SSH client type: native
	I1202 21:15:28.331992  313474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:15:28.331999  313474 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:15:28.485262  313474 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-753958
	
	I1202 21:15:28.485277  313474 ubuntu.go:182] provisioning hostname "functional-753958"
	I1202 21:15:28.485346  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:28.502136  313474 main.go:143] libmachine: Using SSH client type: native
	I1202 21:15:28.502454  313474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:15:28.502463  313474 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-753958 && echo "functional-753958" | sudo tee /etc/hostname
	I1202 21:15:28.662872  313474 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-753958
	
	I1202 21:15:28.662941  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:28.680996  313474 main.go:143] libmachine: Using SSH client type: native
	I1202 21:15:28.681283  313474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:15:28.681296  313474 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753958/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:15:28.829833  313474 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 21:15:28.829849  313474 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 21:15:28.829870  313474 ubuntu.go:190] setting up certificates
	I1202 21:15:28.829878  313474 provision.go:84] configureAuth start
	I1202 21:15:28.829936  313474 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
	I1202 21:15:28.847119  313474 provision.go:143] copyHostCerts
	I1202 21:15:28.847182  313474 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 21:15:28.847194  313474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 21:15:28.847267  313474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 21:15:28.847367  313474 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 21:15:28.847372  313474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 21:15:28.847403  313474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 21:15:28.847459  313474 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 21:15:28.847462  313474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 21:15:28.847485  313474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 21:15:28.847574  313474 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.functional-753958 san=[127.0.0.1 192.168.49.2 functional-753958 localhost minikube]
	I1202 21:15:28.960674  313474 provision.go:177] copyRemoteCerts
	I1202 21:15:28.960733  313474 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:15:28.960772  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:28.978043  313474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:15:29.081719  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 21:15:29.105765  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 21:15:29.122371  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:15:29.139343  313474 provision.go:87] duration metric: took 309.452187ms to configureAuth
	I1202 21:15:29.139359  313474 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:15:29.139545  313474 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:15:29.139550  313474 machine.go:97] duration metric: took 832.92543ms to provisionDockerMachine
	I1202 21:15:29.139557  313474 start.go:293] postStartSetup for "functional-753958" (driver="docker")
	I1202 21:15:29.139567  313474 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:15:29.139623  313474 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:15:29.139660  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:29.156608  313474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:15:29.261796  313474 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:15:29.265154  313474 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:15:29.265170  313474 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:15:29.265181  313474 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 21:15:29.265234  313474 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 21:15:29.265309  313474 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 21:15:29.265381  313474 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts -> hosts in /etc/test/nested/copy/263241
	I1202 21:15:29.265422  313474 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/263241
	I1202 21:15:29.272853  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 21:15:29.290463  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts --> /etc/test/nested/copy/263241/hosts (40 bytes)
	I1202 21:15:29.307373  313474 start.go:296] duration metric: took 167.802474ms for postStartSetup
	I1202 21:15:29.307459  313474 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:15:29.307497  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:29.324791  313474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:15:29.426726  313474 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:15:29.431481  313474 fix.go:56] duration metric: took 1.153466989s for fixHost
	I1202 21:15:29.431495  313474 start.go:83] releasing machines lock for "functional-753958", held for 1.153497537s
	I1202 21:15:29.431566  313474 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
	I1202 21:15:29.447801  313474 ssh_runner.go:195] Run: cat /version.json
	I1202 21:15:29.447846  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:29.447885  313474 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:15:29.447935  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:29.467421  313474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:15:29.471596  313474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:15:29.659911  313474 ssh_runner.go:195] Run: systemctl --version
	I1202 21:15:29.666244  313474 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 21:15:29.670444  313474 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:15:29.670514  313474 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:15:29.678098  313474 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 21:15:29.678112  313474 start.go:496] detecting cgroup driver to use...
	I1202 21:15:29.678141  313474 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 21:15:29.678186  313474 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 21:15:29.694041  313474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 21:15:29.710665  313474 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:15:29.710716  313474 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:15:29.728421  313474 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:15:29.743568  313474 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:15:29.860902  313474 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:15:29.989688  313474 docker.go:234] disabling docker service ...
	I1202 21:15:29.989770  313474 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:15:30.008558  313474 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:15:30.033480  313474 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:15:30.168415  313474 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:15:30.289508  313474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 21:15:30.302465  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:15:30.316926  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 21:15:30.325512  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 21:15:30.334372  313474 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 21:15:30.334439  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 21:15:30.343106  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 21:15:30.351679  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 21:15:30.359860  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 21:15:30.368460  313474 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:15:30.376324  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 21:15:30.384579  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 21:15:30.393108  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1202 21:15:30.401480  313474 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:15:30.408867  313474 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 21:15:30.415924  313474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:15:30.533792  313474 ssh_runner.go:195] Run: sudo systemctl restart containerd
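
The run of sed commands above amounts to point edits of /etc/containerd/config.toml: pin the sandbox (pause) image, force the runc v2 shim, point CNI at /etc/cni/net.d, allow unprivileged ports, and select the cgroupfs driver by setting SystemdCgroup = false, after which containerd is restarted. A standalone Go sketch of that last rewrite (minikube shells out to sed rather than editing the file in-process):

package main

import (
	"fmt"
	"regexp"
)

// rewriteSystemdCgroup rewrites every `SystemdCgroup = ...` line to
// `SystemdCgroup = false`, preserving indentation, the same effect as the
// sed expression logged above.
func rewriteSystemdCgroup(config string) string {
	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
	return re.ReplaceAllString(config, "${1}SystemdCgroup = false")
}

func main() {
	in := "[plugins.cri.containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
	fmt.Print(rewriteSystemdCgroup(in)) // second line becomes: SystemdCgroup = false
}
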
	I1202 21:15:30.657833  313474 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 21:15:30.657894  313474 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 21:15:30.661737  313474 start.go:564] Will wait 60s for crictl version
	I1202 21:15:30.661805  313474 ssh_runner.go:195] Run: which crictl
	I1202 21:15:30.665271  313474 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:15:30.691831  313474 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 21:15:30.691893  313474 ssh_runner.go:195] Run: containerd --version
	I1202 21:15:30.710586  313474 ssh_runner.go:195] Run: containerd --version
	I1202 21:15:30.734130  313474 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 21:15:30.737177  313474 cli_runner.go:164] Run: docker network inspect functional-753958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:15:30.753095  313474 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:15:30.760367  313474 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1202 21:15:30.763216  313474 kubeadm.go:884] updating cluster {Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 21:15:30.763354  313474 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 21:15:30.763426  313474 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:15:30.788120  313474 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 21:15:30.788132  313474 cache_images.go:86] Images are preloaded, skipping loading
	I1202 21:15:30.788138  313474 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1202 21:15:30.788245  313474 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-753958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 21:15:30.788311  313474 ssh_runner.go:195] Run: sudo crictl info
	I1202 21:15:30.816149  313474 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1202 21:15:30.816166  313474 cni.go:84] Creating CNI manager for ""
	I1202 21:15:30.816175  313474 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 21:15:30.816190  313474 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:15:30.816220  313474 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753958 NodeName:functional-753958 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:15:30.816350  313474 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-753958"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
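The extraconfig.go line above ("Overwriting default enable-admission-plugins ... with user provided ...") explains where this rendered config's NamespaceAutoProvision value came from: a user-supplied ExtraOption replaces the component default outright rather than being appended to it. A hedged Go sketch of that key-by-key override, using illustrative names rather than minikube's internals:

package main

import "fmt"

// mergeExtraArgs keeps a component's default flags unless the user supplied
// the same key, in which case the user value wins wholesale.
func mergeExtraArgs(defaults, user map[string]string) map[string]string {
	out := make(map[string]string, len(defaults)+len(user))
	for k, v := range defaults {
		out[k] = v
	}
	for k, v := range user { // user-provided flags overwrite defaults key-by-key
		out[k] = v
	}
	return out
}

func main() {
	defaults := map[string]string{
		"enable-admission-plugins": "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota",
	}
	user := map[string]string{"enable-admission-plugins": "NamespaceAutoProvision"}
	fmt.Println(mergeExtraArgs(defaults, user)["enable-admission-plugins"]) // NamespaceAutoProvision
}
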
	I1202 21:15:30.816417  313474 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 21:15:30.824592  313474 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 21:15:30.824650  313474 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:15:30.832172  313474 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 21:15:30.844549  313474 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 21:15:30.856965  313474 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1202 21:15:30.869111  313474 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:15:30.872973  313474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:15:30.993888  313474 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:15:31.292555  313474 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958 for IP: 192.168.49.2
	I1202 21:15:31.292567  313474 certs.go:195] generating shared ca certs ...
	I1202 21:15:31.292581  313474 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:15:31.292714  313474 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 21:15:31.292766  313474 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 21:15:31.292772  313474 certs.go:257] generating profile certs ...
	I1202 21:15:31.292864  313474 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key
	I1202 21:15:31.292921  313474 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key.c4f6fd35
	I1202 21:15:31.292963  313474 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key
	I1202 21:15:31.293076  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 21:15:31.293105  313474 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 21:15:31.293112  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:15:31.293138  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 21:15:31.293160  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:15:31.293184  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 21:15:31.293230  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 21:15:31.293875  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:15:31.313092  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:15:31.332062  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:15:31.351302  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 21:15:31.370658  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 21:15:31.387720  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 21:15:31.405248  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:15:31.422664  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 21:15:31.440135  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 21:15:31.457687  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:15:31.475495  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 21:15:31.492183  313474 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 21:15:31.504166  313474 ssh_runner.go:195] Run: openssl version
	I1202 21:15:31.510525  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 21:15:31.518840  313474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 21:15:31.522541  313474 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 21:15:31.522596  313474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 21:15:31.563265  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 21:15:31.571112  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:15:31.579437  313474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:15:31.583195  313474 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:15:31.583250  313474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:15:31.628890  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 21:15:31.636777  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 21:15:31.644711  313474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 21:15:31.648206  313474 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 21:15:31.648271  313474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 21:15:31.689010  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
	I1202 21:15:31.696812  313474 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:15:31.700482  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 21:15:31.740999  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 21:15:31.782731  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 21:15:31.823250  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 21:15:31.865611  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 21:15:31.906492  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
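
Each `openssl x509 -noout -checkend 86400` above asks one question: does this certificate expire within the next 24 hours? A non-zero exit would push minikube to regenerate the cert. The same check in pure Go, as a standalone sketch using crypto/x509 (minikube itself shells out to openssl over SSH):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at pemPath expires within
// the given window, the Go equivalent of `openssl x509 -checkend <seconds>`.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		return false, errors.New("no CERTIFICATE block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring) // true corresponds to openssl's non-zero exit
}
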
	I1202 21:15:31.947359  313474 kubeadm.go:401] StartCluster: {Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:15:31.947441  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 21:15:31.947511  313474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:15:31.973182  313474 cri.go:89] found id: ""
	I1202 21:15:31.973243  313474 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:15:31.980768  313474 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 21:15:31.980777  313474 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 21:15:31.980838  313474 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 21:15:31.988019  313474 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:15:31.988518  313474 kubeconfig.go:125] found "functional-753958" server: "https://192.168.49.2:8441"
	I1202 21:15:31.989827  313474 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 21:15:31.997696  313474 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-02 21:00:56.754776837 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-02 21:15:30.864977782 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
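
Drift detection here is deliberately simple: the kubeadm.yaml the cluster was started with is diffed against the freshly rendered kubeadm.yaml.new, and any hunk, like the admission-plugins change above, routes startup through the reconfigure path that follows (stop kube-system containers, regenerate certs and kubeconfigs, re-run the kubeadm init phases). A minimal Go sketch of the comparison; minikube runs `diff -u` remotely so it can log the hunk itself:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// configDrifted reports whether the config in use differs from the freshly
// rendered one. Any difference triggers a cluster reconfigure.
func configDrifted(current, rendered string) (bool, error) {
	a, err := os.ReadFile(current)
	if err != nil {
		return true, err // treat an unreadable config as drift: the safest default
	}
	b, err := os.ReadFile(rendered)
	if err != nil {
		return true, err
	}
	return !bytes.Equal(a, b), nil
}

func main() {
	drifted, _ := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println("drift:", drifted)
}
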
	I1202 21:15:31.997711  313474 kubeadm.go:1161] stopping kube-system containers ...
	I1202 21:15:31.997724  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1202 21:15:31.997791  313474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:15:32.028400  313474 cri.go:89] found id: ""
	I1202 21:15:32.028460  313474 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 21:15:32.046252  313474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:15:32.054174  313474 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  2 21:05 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  2 21:05 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  2 21:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  2 21:05 /etc/kubernetes/scheduler.conf
	
	I1202 21:15:32.054235  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:15:32.061845  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:15:32.069217  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:15:32.069283  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:15:32.076901  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:15:32.084278  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:15:32.084333  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:15:32.091360  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:15:32.098582  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:15:32.098635  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 21:15:32.105786  313474 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 21:15:32.113101  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:15:32.157271  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:15:33.778908  313474 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.621612732s)
	I1202 21:15:33.778983  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:15:33.980110  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:15:34.046494  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
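Rather than a full kubeadm init, the lines above replay individual init phases - certs, kubeconfig, kubelet-start, control-plane, and etcd - all against the freshly copied /var/tmp/minikube/kubeadm.yaml. A sketch of that phase sequence under the same PATH prefix (the loop structure is an assumption, not minikube's code):

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// The init phases the log replays, in order, each reading the
	// same rendered kubeadm config.
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		cmd := fmt.Sprintf(`env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" `+
			`kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
		if out, err := exec.Command("sudo", "/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			log.Fatalf("phase %q failed: %v\n%s", phase, err, out)
		}
	}
}
```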
	I1202 21:15:34.096642  313474 api_server.go:52] waiting for apiserver process to appear ...
	I1202 21:15:34.096721  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:34.596907  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:35.097723  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:35.597306  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:36.096830  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:36.597596  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:37.096902  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:37.597594  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:38.097418  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:38.596863  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:39.096945  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:39.596885  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:40.097285  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:40.597766  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:41.097086  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:41.597610  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:42.097762  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:42.597458  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:43.097372  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:43.596919  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:44.096844  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:44.597785  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:45.097138  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:45.597877  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:46.096835  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:46.596922  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:47.097709  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:47.597777  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:48.097634  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:48.597037  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:49.097698  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:49.597298  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:50.097150  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:50.596854  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:51.097637  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:51.596893  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:52.097490  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:52.597734  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:53.097878  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:53.597585  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:54.097045  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:54.596935  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:55.096967  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:55.597277  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:56.097741  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:56.597498  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:57.097835  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:57.596980  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:58.097825  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:58.597397  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:59.097737  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:59.597771  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:00.097000  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:00.597596  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:01.096857  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:01.596807  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:02.096858  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:02.596921  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:03.097782  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:03.597168  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:04.097826  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:04.597834  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:05.096912  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:05.597015  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:06.097323  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:06.596890  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:07.096868  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:07.597441  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:08.097848  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:08.596805  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:09.096809  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:09.597086  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:10.097186  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:10.597613  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:11.096962  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:11.596871  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:12.097854  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:12.596857  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:13.096839  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:13.596917  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:14.097213  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:14.596830  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:15.097886  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:15.597752  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:16.096793  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:16.597667  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:17.096901  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:17.597296  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:18.097838  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:18.597565  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:19.097476  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:19.597700  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:20.096912  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:20.597010  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:21.097503  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:21.596848  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:22.096818  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:22.596913  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:23.097537  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:23.596855  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:24.096911  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:24.596909  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:25.097013  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:25.596904  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:26.097839  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:26.596939  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:27.097272  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:27.597856  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:28.097301  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:28.596953  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:29.096893  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:29.597192  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:30.097860  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:30.597517  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:31.097502  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:31.597497  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:32.097081  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:32.597504  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:33.097354  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:33.596893  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
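The long run of pgrep probes above is the apiserver wait announced at api_server.go:52: one probe roughly every 500ms from 21:15:34 to 21:16:33 without kube-apiserver ever appearing, at which point minikube gives up and starts collecting diagnostics. A minimal sketch of that poll-until-deadline pattern (the one-minute timeout is inferred from the log window, not a documented constant):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the pattern matches or the deadline passes.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 once a process matching the full command line exists.
		if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // the ~0.5s cadence visible in the log
	}
	return fmt.Errorf("no process matching %q within %v", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
		fmt.Println(err) // in this run, the apiserver never appeared
	}
}
```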
	I1202 21:16:34.097219  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:34.097318  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:34.124123  313474 cri.go:89] found id: ""
	I1202 21:16:34.124137  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.124144  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:34.124150  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:34.124209  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:34.149042  313474 cri.go:89] found id: ""
	I1202 21:16:34.149056  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.149063  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:34.149069  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:34.149127  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:34.172796  313474 cri.go:89] found id: ""
	I1202 21:16:34.172810  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.172817  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:34.172823  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:34.172888  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:34.199775  313474 cri.go:89] found id: ""
	I1202 21:16:34.199789  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.199796  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:34.199801  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:34.199858  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:34.223410  313474 cri.go:89] found id: ""
	I1202 21:16:34.223424  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.223431  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:34.223436  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:34.223542  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:34.248663  313474 cri.go:89] found id: ""
	I1202 21:16:34.248677  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.248683  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:34.248689  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:34.248747  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:34.272612  313474 cri.go:89] found id: ""
	I1202 21:16:34.272626  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.272633  313474 logs.go:284] No container was found matching "kindnet"
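Diagnostics start by asking the CRI for each expected component by name; every query above returns an empty ID list, so no part of the control plane was ever created as a container, running or exited. A sketch of that enumeration (component list and crictl flags from the log; the program around them is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The control-plane components the log checks for, in the same order.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// --quiet prints bare container IDs, one per line; empty output
		// means no container in any state matches the name filter.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			continue
		}
		if ids := strings.Fields(string(out)); len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
		}
	}
}
```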
	I1202 21:16:34.272641  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:34.272650  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:34.304889  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:34.304905  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:34.363275  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:34.363294  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:34.379039  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:34.379054  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:34.446716  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:34.438632   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.439203   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.441070   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.441841   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.443136   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:34.438632   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.439203   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.441070   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.441841   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.443136   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:34.446728  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:34.446739  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
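Each diagnostic pass ends the same way: kubectl describe nodes fails with connection refused on localhost:8441, which points at the apiserver never binding its port rather than at a bad kubeconfig. A tiny reachability check that reproduces the same signal (host and port from the log; everything else assumed):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The describe-nodes stderr above shows
	// "dial tcp [::1]:8441: connect: connection refused"; a bare TCP dial
	// reproduces the same check without kubectl or a kubeconfig.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver endpoint unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}
```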
	I1202 21:16:37.010773  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:37.023010  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:37.023081  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:37.074765  313474 cri.go:89] found id: ""
	I1202 21:16:37.074779  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.074786  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:37.074791  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:37.074849  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:37.105604  313474 cri.go:89] found id: ""
	I1202 21:16:37.105617  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.105624  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:37.105630  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:37.105731  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:37.135381  313474 cri.go:89] found id: ""
	I1202 21:16:37.135395  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.135402  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:37.135407  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:37.135465  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:37.159378  313474 cri.go:89] found id: ""
	I1202 21:16:37.159391  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.159398  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:37.159404  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:37.159460  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:37.184079  313474 cri.go:89] found id: ""
	I1202 21:16:37.184093  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.184100  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:37.184105  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:37.184266  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:37.208512  313474 cri.go:89] found id: ""
	I1202 21:16:37.208526  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.208533  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:37.208539  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:37.208598  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:37.231722  313474 cri.go:89] found id: ""
	I1202 21:16:37.231735  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.231742  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:37.231750  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:37.231760  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:37.247154  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:37.247171  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:37.311439  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:37.303898   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.304432   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.306024   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.306447   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.307866   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:37.303898   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.304432   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.306024   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.306447   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.307866   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:37.311449  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:37.311459  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:37.374896  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:37.374916  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:37.402545  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:37.402561  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:39.959953  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:39.969383  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:39.969445  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:39.998437  313474 cri.go:89] found id: ""
	I1202 21:16:39.998450  313474 logs.go:282] 0 containers: []
	W1202 21:16:39.998457  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:39.998463  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:39.998519  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:40.079783  313474 cri.go:89] found id: ""
	I1202 21:16:40.079799  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.079807  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:40.079813  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:40.079882  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:40.112177  313474 cri.go:89] found id: ""
	I1202 21:16:40.112203  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.112210  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:40.112217  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:40.112289  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:40.148805  313474 cri.go:89] found id: ""
	I1202 21:16:40.148820  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.148828  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:40.148834  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:40.148918  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:40.180826  313474 cri.go:89] found id: ""
	I1202 21:16:40.180841  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.180848  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:40.180855  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:40.180930  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:40.209004  313474 cri.go:89] found id: ""
	I1202 21:16:40.209018  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.209025  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:40.209032  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:40.209091  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:40.234748  313474 cri.go:89] found id: ""
	I1202 21:16:40.234762  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.234769  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:40.234778  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:40.234788  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:40.297246  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:40.289556   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.290130   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.291723   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.292196   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.293755   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:40.289556   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.290130   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.291723   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.292196   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.293755   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:40.297257  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:40.297268  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:40.359276  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:40.359297  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:40.389165  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:40.389181  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:40.447977  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:40.447997  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:42.964946  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:42.974927  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:42.974987  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:42.997720  313474 cri.go:89] found id: ""
	I1202 21:16:42.997734  313474 logs.go:282] 0 containers: []
	W1202 21:16:42.997741  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:42.997747  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:42.997808  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:43.022947  313474 cri.go:89] found id: ""
	I1202 21:16:43.022961  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.022968  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:43.022973  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:43.023034  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:43.053855  313474 cri.go:89] found id: ""
	I1202 21:16:43.053869  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.053876  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:43.053881  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:43.053941  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:43.086462  313474 cri.go:89] found id: ""
	I1202 21:16:43.086475  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.086482  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:43.086487  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:43.086545  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:43.112776  313474 cri.go:89] found id: ""
	I1202 21:16:43.112790  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.112798  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:43.112803  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:43.112861  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:43.137549  313474 cri.go:89] found id: ""
	I1202 21:16:43.137563  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.137570  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:43.137576  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:43.137695  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:43.161710  313474 cri.go:89] found id: ""
	I1202 21:16:43.161724  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.161731  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:43.161739  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:43.161751  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:43.217891  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:43.217910  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:43.233516  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:43.233539  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:43.295127  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:43.287570   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.288255   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.289907   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.290345   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.291827   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:43.287570   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.288255   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.289907   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.290345   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.291827   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:43.295145  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:43.295157  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:43.361614  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:43.361638  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:45.891122  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:45.901162  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:45.901219  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:45.924968  313474 cri.go:89] found id: ""
	I1202 21:16:45.924982  313474 logs.go:282] 0 containers: []
	W1202 21:16:45.924989  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:45.924994  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:45.925064  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:45.960327  313474 cri.go:89] found id: ""
	I1202 21:16:45.960350  313474 logs.go:282] 0 containers: []
	W1202 21:16:45.960357  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:45.960362  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:45.960428  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:45.988303  313474 cri.go:89] found id: ""
	I1202 21:16:45.988317  313474 logs.go:282] 0 containers: []
	W1202 21:16:45.988324  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:45.988330  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:45.988395  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:46.015569  313474 cri.go:89] found id: ""
	I1202 21:16:46.015582  313474 logs.go:282] 0 containers: []
	W1202 21:16:46.015590  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:46.015595  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:46.015656  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:46.042481  313474 cri.go:89] found id: ""
	I1202 21:16:46.042494  313474 logs.go:282] 0 containers: []
	W1202 21:16:46.042511  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:46.042517  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:46.042583  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:46.076870  313474 cri.go:89] found id: ""
	I1202 21:16:46.076910  313474 logs.go:282] 0 containers: []
	W1202 21:16:46.076918  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:46.076924  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:46.076995  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:46.110449  313474 cri.go:89] found id: ""
	I1202 21:16:46.110490  313474 logs.go:282] 0 containers: []
	W1202 21:16:46.110498  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:46.110514  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:46.110525  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:46.188559  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:46.179077   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.179721   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.181442   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.182155   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.183999   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:46.179077   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.179721   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.181442   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.182155   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.183999   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:46.188579  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:46.188590  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:46.253578  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:46.253598  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:46.281754  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:46.281771  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:46.338833  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:46.338850  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:48.855152  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:48.865294  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:48.865357  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:48.889825  313474 cri.go:89] found id: ""
	I1202 21:16:48.889839  313474 logs.go:282] 0 containers: []
	W1202 21:16:48.889846  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:48.889852  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:48.889911  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:48.913688  313474 cri.go:89] found id: ""
	I1202 21:16:48.913705  313474 logs.go:282] 0 containers: []
	W1202 21:16:48.913712  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:48.913718  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:48.913781  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:48.937742  313474 cri.go:89] found id: ""
	I1202 21:16:48.937756  313474 logs.go:282] 0 containers: []
	W1202 21:16:48.937763  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:48.937779  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:48.937837  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:48.961294  313474 cri.go:89] found id: ""
	I1202 21:16:48.961308  313474 logs.go:282] 0 containers: []
	W1202 21:16:48.961315  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:48.961320  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:48.961378  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:48.985846  313474 cri.go:89] found id: ""
	I1202 21:16:48.985860  313474 logs.go:282] 0 containers: []
	W1202 21:16:48.985866  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:48.985872  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:48.985930  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:49.014392  313474 cri.go:89] found id: ""
	I1202 21:16:49.014405  313474 logs.go:282] 0 containers: []
	W1202 21:16:49.014412  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:49.014418  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:49.014478  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:49.038987  313474 cri.go:89] found id: ""
	I1202 21:16:49.039000  313474 logs.go:282] 0 containers: []
	W1202 21:16:49.039006  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:49.039014  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:49.039024  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:49.102227  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:49.102246  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:49.120563  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:49.120579  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:49.183266  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:49.175299   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.176040   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.177692   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.178265   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.179815   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:49.175299   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.176040   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.177692   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.178265   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.179815   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:49.183286  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:49.183297  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:49.246439  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:49.246458  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:51.775321  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:51.785184  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:51.785254  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:51.809810  313474 cri.go:89] found id: ""
	I1202 21:16:51.809824  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.809831  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:51.809837  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:51.809900  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:51.835767  313474 cri.go:89] found id: ""
	I1202 21:16:51.835795  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.835802  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:51.835808  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:51.835866  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:51.865885  313474 cri.go:89] found id: ""
	I1202 21:16:51.865900  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.865914  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:51.865920  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:51.865980  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:51.891809  313474 cri.go:89] found id: ""
	I1202 21:16:51.891823  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.891831  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:51.891837  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:51.891898  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:51.916253  313474 cri.go:89] found id: ""
	I1202 21:16:51.916267  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.916274  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:51.916280  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:51.916349  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:51.941007  313474 cri.go:89] found id: ""
	I1202 21:16:51.941021  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.941028  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:51.941034  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:51.941093  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:51.969353  313474 cri.go:89] found id: ""
	I1202 21:16:51.969368  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.969375  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:51.969382  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:51.969393  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:52.025261  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:52.025287  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:52.045534  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:52.045551  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:52.124972  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:52.117298   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.117769   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.119332   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.119874   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.121486   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
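	The "describe nodes" fallback fails for the same reason; the Run line above shows it is just the bundled kubectl pointed at the node-local kubeconfig:
	
	# replaying the failing fallback verbatim from the Run line above (inside the node)
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig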
	I1202 21:16:52.124982  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:52.124993  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:52.189351  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:52.189372  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:54.721393  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:54.732232  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:54.732290  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:54.757594  313474 cri.go:89] found id: ""
	I1202 21:16:54.757608  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.757630  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:54.757671  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:54.757734  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:54.783381  313474 cri.go:89] found id: ""
	I1202 21:16:54.783395  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.783402  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:54.783407  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:54.783480  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:54.808177  313474 cri.go:89] found id: ""
	I1202 21:16:54.808198  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.808205  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:54.808211  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:54.808291  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:54.831293  313474 cri.go:89] found id: ""
	I1202 21:16:54.831307  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.831314  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:54.831331  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:54.831399  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:54.854343  313474 cri.go:89] found id: ""
	I1202 21:16:54.854357  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.854363  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:54.854368  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:54.854427  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:54.882636  313474 cri.go:89] found id: ""
	I1202 21:16:54.882650  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.882667  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:54.882673  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:54.882739  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:54.911098  313474 cri.go:89] found id: ""
	I1202 21:16:54.911112  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.911120  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:54.911128  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:54.911138  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:54.970728  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:54.970746  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:54.986382  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:54.986399  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:55.069421  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:55.058528   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.059675   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.060854   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.061730   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.063013   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:16:55.069437  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:55.069448  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:55.151228  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:55.151266  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:57.687319  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:57.696959  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:57.697017  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:57.720719  313474 cri.go:89] found id: ""
	I1202 21:16:57.720733  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.720740  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:57.720746  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:57.720811  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:57.749778  313474 cri.go:89] found id: ""
	I1202 21:16:57.749792  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.749800  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:57.749805  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:57.749863  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:57.772871  313474 cri.go:89] found id: ""
	I1202 21:16:57.772884  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.772891  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:57.772896  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:57.772954  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:57.799916  313474 cri.go:89] found id: ""
	I1202 21:16:57.799931  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.799937  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:57.799943  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:57.800000  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:57.827165  313474 cri.go:89] found id: ""
	I1202 21:16:57.827179  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.827186  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:57.827191  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:57.827248  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:57.852136  313474 cri.go:89] found id: ""
	I1202 21:16:57.852150  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.852157  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:57.852166  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:57.852222  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:57.876624  313474 cri.go:89] found id: ""
	I1202 21:16:57.876638  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.876645  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:57.876654  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:57.876664  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:57.940462  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:57.932401   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.933065   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.934751   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.935358   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.936935   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:16:57.940473  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:57.940483  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:58.004519  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:58.004544  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:58.036463  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:58.036479  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:58.096205  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:58.096223  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
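	With no control-plane containers found, each cycle falls back to host-level logs; the Run lines spell out the exact commands, which can be replayed inside the node:
	
	# the same collection commands, copied from the Run lines, for manual replay inside the node
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a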
	I1202 21:17:00.618984  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:00.629839  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:00.629906  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:00.661470  313474 cri.go:89] found id: ""
	I1202 21:17:00.661490  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.661498  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:00.661505  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:00.661578  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:00.689166  313474 cri.go:89] found id: ""
	I1202 21:17:00.689182  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.689189  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:00.689202  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:00.689273  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:00.716048  313474 cri.go:89] found id: ""
	I1202 21:17:00.716063  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.716070  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:00.716076  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:00.716143  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:00.748003  313474 cri.go:89] found id: ""
	I1202 21:17:00.748017  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.748025  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:00.748030  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:00.748093  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:00.779207  313474 cri.go:89] found id: ""
	I1202 21:17:00.779223  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.779231  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:00.779238  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:00.779312  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:00.805166  313474 cri.go:89] found id: ""
	I1202 21:17:00.805184  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.805194  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:00.805200  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:00.805273  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:00.832311  313474 cri.go:89] found id: ""
	I1202 21:17:00.832326  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.832333  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:00.832342  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:00.832352  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:00.889599  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:00.889625  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:00.906214  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:00.906230  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:00.978709  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:00.969088   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.970474   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.971348   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.973000   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.973319   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:00.978720  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:00.978734  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:01.044083  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:01.044105  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:03.609427  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:03.620657  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:03.620726  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:03.651829  313474 cri.go:89] found id: ""
	I1202 21:17:03.651844  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.651851  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:03.651857  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:03.651923  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:03.678868  313474 cri.go:89] found id: ""
	I1202 21:17:03.678889  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.678896  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:03.678902  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:03.678969  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:03.708792  313474 cri.go:89] found id: ""
	I1202 21:17:03.708806  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.708814  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:03.708820  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:03.708883  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:03.738501  313474 cri.go:89] found id: ""
	I1202 21:17:03.738516  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.738524  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:03.738531  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:03.738604  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:03.770026  313474 cri.go:89] found id: ""
	I1202 21:17:03.770050  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.770057  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:03.770063  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:03.770127  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:03.804285  313474 cri.go:89] found id: ""
	I1202 21:17:03.804300  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.804308  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:03.804324  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:03.804391  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:03.831572  313474 cri.go:89] found id: ""
	I1202 21:17:03.831587  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.831594  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:03.831602  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:03.831613  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:03.860060  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:03.860086  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:03.921719  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:03.921744  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:03.939033  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:03.939051  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:04.010810  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:03.998480   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:03.999337   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:04.001085   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:04.001461   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:04.006454   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:04.010823  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:04.010835  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:06.576791  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:06.587693  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:06.587761  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:06.614477  313474 cri.go:89] found id: ""
	I1202 21:17:06.614493  313474 logs.go:282] 0 containers: []
	W1202 21:17:06.614500  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:06.614506  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:06.614571  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:06.641625  313474 cri.go:89] found id: ""
	I1202 21:17:06.641639  313474 logs.go:282] 0 containers: []
	W1202 21:17:06.641646  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:06.641670  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:06.641735  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:06.667567  313474 cri.go:89] found id: ""
	I1202 21:17:06.667581  313474 logs.go:282] 0 containers: []
	W1202 21:17:06.667588  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:06.667594  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:06.667657  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:06.694684  313474 cri.go:89] found id: ""
	I1202 21:17:06.694699  313474 logs.go:282] 0 containers: []
	W1202 21:17:06.694706  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:06.694711  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:06.694777  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:06.723071  313474 cri.go:89] found id: ""
	I1202 21:17:06.723090  313474 logs.go:282] 0 containers: []
	W1202 21:17:06.723097  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:06.723103  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:06.723185  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:06.751448  313474 cri.go:89] found id: ""
	I1202 21:17:06.751462  313474 logs.go:282] 0 containers: []
	W1202 21:17:06.751469  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:06.751476  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:06.751544  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:06.781674  313474 cri.go:89] found id: ""
	I1202 21:17:06.781689  313474 logs.go:282] 0 containers: []
	W1202 21:17:06.781697  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:06.781705  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:06.781723  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:06.812650  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:06.812669  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:06.874390  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:06.874410  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:06.891708  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:06.891726  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:06.960203  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:06.952388   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:06.952955   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:06.954509   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:06.954979   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:06.956555   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:06.960213  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:06.960225  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:09.527222  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:09.537303  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:09.537380  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:09.562091  313474 cri.go:89] found id: ""
	I1202 21:17:09.562112  313474 logs.go:282] 0 containers: []
	W1202 21:17:09.562120  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:09.562125  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:09.562188  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:09.587772  313474 cri.go:89] found id: ""
	I1202 21:17:09.587786  313474 logs.go:282] 0 containers: []
	W1202 21:17:09.587802  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:09.587808  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:09.587876  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:09.613205  313474 cri.go:89] found id: ""
	I1202 21:17:09.613224  313474 logs.go:282] 0 containers: []
	W1202 21:17:09.613232  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:09.613238  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:09.613298  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:09.639556  313474 cri.go:89] found id: ""
	I1202 21:17:09.639570  313474 logs.go:282] 0 containers: []
	W1202 21:17:09.639577  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:09.639583  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:09.639648  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:09.668717  313474 cri.go:89] found id: ""
	I1202 21:17:09.668731  313474 logs.go:282] 0 containers: []
	W1202 21:17:09.668737  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:09.668743  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:09.668800  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:09.692671  313474 cri.go:89] found id: ""
	I1202 21:17:09.692685  313474 logs.go:282] 0 containers: []
	W1202 21:17:09.692693  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:09.692698  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:09.692756  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:09.717454  313474 cri.go:89] found id: ""
	I1202 21:17:09.717468  313474 logs.go:282] 0 containers: []
	W1202 21:17:09.717475  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:09.717484  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:09.717494  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:09.747114  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:09.747130  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:09.803274  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:09.803294  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:09.819246  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:09.819264  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:09.879465  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:09.872021   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:09.872397   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:09.874008   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:09.874550   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:09.876009   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:09.879474  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:09.879485  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:12.443298  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:12.453026  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:12.453087  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:12.479471  313474 cri.go:89] found id: ""
	I1202 21:17:12.479485  313474 logs.go:282] 0 containers: []
	W1202 21:17:12.479492  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:12.479498  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:12.479559  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:12.503554  313474 cri.go:89] found id: ""
	I1202 21:17:12.503567  313474 logs.go:282] 0 containers: []
	W1202 21:17:12.503575  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:12.503580  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:12.503637  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:12.528839  313474 cri.go:89] found id: ""
	I1202 21:17:12.528854  313474 logs.go:282] 0 containers: []
	W1202 21:17:12.528861  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:12.528866  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:12.528943  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:12.553622  313474 cri.go:89] found id: ""
	I1202 21:17:12.553644  313474 logs.go:282] 0 containers: []
	W1202 21:17:12.553663  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:12.553669  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:12.553737  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:12.579503  313474 cri.go:89] found id: ""
	I1202 21:17:12.579516  313474 logs.go:282] 0 containers: []
	W1202 21:17:12.579523  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:12.579528  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:12.579583  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:12.612312  313474 cri.go:89] found id: ""
	I1202 21:17:12.612327  313474 logs.go:282] 0 containers: []
	W1202 21:17:12.612334  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:12.612339  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:12.612413  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:12.636613  313474 cri.go:89] found id: ""
	I1202 21:17:12.636628  313474 logs.go:282] 0 containers: []
	W1202 21:17:12.636635  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:12.636642  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:12.636652  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:12.696881  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:12.689031   12670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:12.689585   12670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:12.691223   12670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:12.691645   12670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:12.693045   12670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:12.696892  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:12.696903  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:12.758877  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:12.758898  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:12.786233  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:12.786249  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:12.841290  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:12.841308  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
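	The timestamps show the probe loop retrying on a roughly three-second cadence (21:16:51 through 21:17:15) with localhost:8441 refused on every pass. A crude stand-in for the same wait, a sketch only and not minikube's actual wait logic:
	
	# sketch only: poll the apiserver health endpoint (run inside the node) until it answers or time out
	timeout 120 bash -c 'until curl -ksf https://localhost:8441/livez >/dev/null; do sleep 3; done' \
	  || echo 'apiserver never became reachable on :8441'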
	I1202 21:17:15.357945  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:15.367765  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:15.367824  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:15.395603  313474 cri.go:89] found id: ""
	I1202 21:17:15.395617  313474 logs.go:282] 0 containers: []
	W1202 21:17:15.395624  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:15.395629  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:15.395688  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:15.418671  313474 cri.go:89] found id: ""
	I1202 21:17:15.418684  313474 logs.go:282] 0 containers: []
	W1202 21:17:15.418691  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:15.418705  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:15.418763  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:15.442594  313474 cri.go:89] found id: ""
	I1202 21:17:15.442607  313474 logs.go:282] 0 containers: []
	W1202 21:17:15.442615  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:15.442624  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:15.442680  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:15.466331  313474 cri.go:89] found id: ""
	I1202 21:17:15.466345  313474 logs.go:282] 0 containers: []
	W1202 21:17:15.466352  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:15.466357  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:15.466416  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:15.491762  313474 cri.go:89] found id: ""
	I1202 21:17:15.491775  313474 logs.go:282] 0 containers: []
	W1202 21:17:15.491782  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:15.491788  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:15.491847  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:15.517473  313474 cri.go:89] found id: ""
	I1202 21:17:15.517487  313474 logs.go:282] 0 containers: []
	W1202 21:17:15.517503  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:15.517509  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:15.517577  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:15.544100  313474 cri.go:89] found id: ""
	I1202 21:17:15.544122  313474 logs.go:282] 0 containers: []
	W1202 21:17:15.544129  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:15.544138  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:15.544148  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:15.570436  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:15.570453  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:15.625879  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:15.625897  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:15.641070  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:15.641091  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:15.704897  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:15.696064   12793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:15.696969   12793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:15.698638   12793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:15.698933   12793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:15.701123   12793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
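
Every kubectl call above fails the same way because nothing is listening on port 8441. A direct probe of the endpoint from inside the node confirms this (an illustrative check, not part of the captured log; -k skips certificate verification):

    # Expect 'connection refused' while the apiserver is down
    curl -k --max-time 5 https://localhost:8441/livez
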
	I1202 21:17:15.704906  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:15.704916  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:18.272120  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:18.282411  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:18.282474  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:18.310144  313474 cri.go:89] found id: ""
	I1202 21:17:18.310158  313474 logs.go:282] 0 containers: []
	W1202 21:17:18.310165  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:18.310170  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:18.310230  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:18.339624  313474 cri.go:89] found id: ""
	I1202 21:17:18.339637  313474 logs.go:282] 0 containers: []
	W1202 21:17:18.339645  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:18.339650  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:18.339709  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:18.367236  313474 cri.go:89] found id: ""
	I1202 21:17:18.367252  313474 logs.go:282] 0 containers: []
	W1202 21:17:18.367259  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:18.367265  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:18.367323  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:18.391197  313474 cri.go:89] found id: ""
	I1202 21:17:18.391213  313474 logs.go:282] 0 containers: []
	W1202 21:17:18.391220  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:18.391226  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:18.391285  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:18.419753  313474 cri.go:89] found id: ""
	I1202 21:17:18.419768  313474 logs.go:282] 0 containers: []
	W1202 21:17:18.419775  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:18.419780  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:18.419841  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:18.445569  313474 cri.go:89] found id: ""
	I1202 21:17:18.445588  313474 logs.go:282] 0 containers: []
	W1202 21:17:18.445596  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:18.445601  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:18.445689  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:18.471844  313474 cri.go:89] found id: ""
	I1202 21:17:18.471858  313474 logs.go:282] 0 containers: []
	W1202 21:17:18.471865  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:18.471882  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:18.471893  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:18.500607  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:18.500623  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:18.556521  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:18.556540  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:18.572100  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:18.572115  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:18.637389  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:18.628942   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:18.629878   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:18.631639   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:18.632186   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:18.633838   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:18.637399  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:18.637419  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
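
The log-gathering steps in each cycle map to ordinary journalctl and dmesg invocations; run manually they look like this (same flags as in the log above):

    sudo journalctl -u kubelet -n 400       # last 400 kubelet entries
    sudo journalctl -u containerd -n 400    # last 400 containerd entries
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
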
	I1202 21:17:21.200861  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:21.210744  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:21.210815  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:21.235330  313474 cri.go:89] found id: ""
	I1202 21:17:21.235344  313474 logs.go:282] 0 containers: []
	W1202 21:17:21.235351  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:21.235356  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:21.235412  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:21.263273  313474 cri.go:89] found id: ""
	I1202 21:17:21.263287  313474 logs.go:282] 0 containers: []
	W1202 21:17:21.263294  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:21.263299  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:21.263358  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:21.295429  313474 cri.go:89] found id: ""
	I1202 21:17:21.295443  313474 logs.go:282] 0 containers: []
	W1202 21:17:21.295450  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:21.295455  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:21.295522  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:21.339988  313474 cri.go:89] found id: ""
	I1202 21:17:21.340017  313474 logs.go:282] 0 containers: []
	W1202 21:17:21.340025  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:21.340031  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:21.340094  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:21.366146  313474 cri.go:89] found id: ""
	I1202 21:17:21.366159  313474 logs.go:282] 0 containers: []
	W1202 21:17:21.366166  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:21.366171  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:21.366234  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:21.396896  313474 cri.go:89] found id: ""
	I1202 21:17:21.396910  313474 logs.go:282] 0 containers: []
	W1202 21:17:21.396917  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:21.396922  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:21.396980  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:21.424236  313474 cri.go:89] found id: ""
	I1202 21:17:21.424249  313474 logs.go:282] 0 containers: []
	W1202 21:17:21.424256  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:21.424273  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:21.424284  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:21.452897  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:21.452913  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:21.511384  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:21.511402  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:21.527095  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:21.527121  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:21.587938  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:21.579696   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:21.580462   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:21.582330   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:21.582881   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:21.584480   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:21.587948  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:21.587958  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:24.156062  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:24.166297  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:24.166383  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:24.194537  313474 cri.go:89] found id: ""
	I1202 21:17:24.194550  313474 logs.go:282] 0 containers: []
	W1202 21:17:24.194558  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:24.194564  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:24.194624  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:24.218699  313474 cri.go:89] found id: ""
	I1202 21:17:24.218714  313474 logs.go:282] 0 containers: []
	W1202 21:17:24.218728  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:24.218734  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:24.218796  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:24.244266  313474 cri.go:89] found id: ""
	I1202 21:17:24.244280  313474 logs.go:282] 0 containers: []
	W1202 21:17:24.244287  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:24.244292  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:24.244352  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:24.269104  313474 cri.go:89] found id: ""
	I1202 21:17:24.269117  313474 logs.go:282] 0 containers: []
	W1202 21:17:24.269124  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:24.269129  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:24.269186  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:24.296650  313474 cri.go:89] found id: ""
	I1202 21:17:24.296663  313474 logs.go:282] 0 containers: []
	W1202 21:17:24.296671  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:24.296677  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:24.296745  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:24.323551  313474 cri.go:89] found id: ""
	I1202 21:17:24.323564  313474 logs.go:282] 0 containers: []
	W1202 21:17:24.323572  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:24.323579  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:24.323648  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:24.353085  313474 cri.go:89] found id: ""
	I1202 21:17:24.353109  313474 logs.go:282] 0 containers: []
	W1202 21:17:24.353117  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:24.353126  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:24.353136  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:24.382045  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:24.382062  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:24.438756  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:24.438773  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:24.454650  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:24.454665  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:24.517340  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:24.509295   13108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:24.509909   13108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:24.511497   13108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:24.512110   13108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:24.513756   13108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:24.517351  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:24.517371  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:27.081832  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:27.091605  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:27.091662  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:27.116713  313474 cri.go:89] found id: ""
	I1202 21:17:27.116726  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.116734  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:27.116739  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:27.116801  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:27.140809  313474 cri.go:89] found id: ""
	I1202 21:17:27.140823  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.140830  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:27.140835  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:27.140918  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:27.167221  313474 cri.go:89] found id: ""
	I1202 21:17:27.167235  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.167242  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:27.167247  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:27.167302  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:27.191660  313474 cri.go:89] found id: ""
	I1202 21:17:27.191674  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.191681  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:27.191686  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:27.191755  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:27.219696  313474 cri.go:89] found id: ""
	I1202 21:17:27.219719  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.219727  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:27.219732  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:27.219801  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:27.247486  313474 cri.go:89] found id: ""
	I1202 21:17:27.247499  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.247506  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:27.247512  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:27.247572  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:27.270666  313474 cri.go:89] found id: ""
	I1202 21:17:27.270679  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.270687  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:27.270695  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:27.270704  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:27.329329  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:27.329349  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:27.350719  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:27.350735  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:27.420274  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:27.411429   13205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:27.412136   13205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:27.413912   13205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:27.414487   13205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:27.416006   13205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:27.420285  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:27.420338  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:27.487442  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:27.487462  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
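
The "container status" step uses a small fallback chain: prefer crictl wherever `which` finds it, otherwise try a bare crictl from root's PATH, and finally fall back to docker. As a standalone one-liner (copied from the Run: lines above):

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
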
	I1202 21:17:30.014027  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:30.043373  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:30.043450  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:30.070998  313474 cri.go:89] found id: ""
	I1202 21:17:30.071012  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.071020  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:30.071026  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:30.071090  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:30.100616  313474 cri.go:89] found id: ""
	I1202 21:17:30.100630  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.100643  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:30.100649  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:30.100710  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:30.130598  313474 cri.go:89] found id: ""
	I1202 21:17:30.130612  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.130620  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:30.130626  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:30.130687  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:30.157465  313474 cri.go:89] found id: ""
	I1202 21:17:30.157479  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.157486  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:30.157492  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:30.157550  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:30.182842  313474 cri.go:89] found id: ""
	I1202 21:17:30.182857  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.182864  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:30.182870  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:30.182930  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:30.211948  313474 cri.go:89] found id: ""
	I1202 21:17:30.211962  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.211969  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:30.211975  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:30.212034  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:30.240992  313474 cri.go:89] found id: ""
	I1202 21:17:30.241006  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.241013  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:30.241020  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:30.241031  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:30.296604  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:30.296621  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:30.314431  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:30.314447  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:30.385351  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:30.377549   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:30.378411   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:30.379961   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:30.380269   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:30.381891   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:30.385362  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:30.385372  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:30.451748  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:30.451771  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
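
The "describe nodes" step pins both the kubectl binary and the kubeconfig to the copies minikube provisioned on the node, so it fails only when the apiserver itself is unreachable (command as captured above):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
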
	I1202 21:17:32.983767  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:32.993977  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:32.994037  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:33.020180  313474 cri.go:89] found id: ""
	I1202 21:17:33.020195  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.020202  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:33.020208  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:33.020280  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:33.048366  313474 cri.go:89] found id: ""
	I1202 21:17:33.048379  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.048386  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:33.048392  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:33.048453  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:33.075220  313474 cri.go:89] found id: ""
	I1202 21:17:33.075240  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.075247  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:33.075253  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:33.075326  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:33.099808  313474 cri.go:89] found id: ""
	I1202 21:17:33.099823  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.099831  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:33.099837  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:33.099897  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:33.124213  313474 cri.go:89] found id: ""
	I1202 21:17:33.124226  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.124233  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:33.124239  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:33.124297  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:33.150102  313474 cri.go:89] found id: ""
	I1202 21:17:33.150116  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.150123  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:33.150129  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:33.150190  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:33.174754  313474 cri.go:89] found id: ""
	I1202 21:17:33.174768  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.174775  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:33.174784  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:33.174794  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:33.243781  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:33.236366   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:33.236709   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:33.238184   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:33.238579   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:33.240086   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:33.243791  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:33.243802  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:33.306573  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:33.306592  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:33.336859  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:33.336876  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:33.398386  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:33.398404  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:35.914658  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:35.924718  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:35.924778  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:35.950094  313474 cri.go:89] found id: ""
	I1202 21:17:35.950108  313474 logs.go:282] 0 containers: []
	W1202 21:17:35.950114  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:35.950120  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:35.950182  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:35.974633  313474 cri.go:89] found id: ""
	I1202 21:17:35.974647  313474 logs.go:282] 0 containers: []
	W1202 21:17:35.974654  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:35.974660  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:35.974719  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:35.998845  313474 cri.go:89] found id: ""
	I1202 21:17:35.998859  313474 logs.go:282] 0 containers: []
	W1202 21:17:35.998866  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:35.998872  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:35.998933  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:36.027158  313474 cri.go:89] found id: ""
	I1202 21:17:36.027173  313474 logs.go:282] 0 containers: []
	W1202 21:17:36.027186  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:36.027192  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:36.027259  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:36.052916  313474 cri.go:89] found id: ""
	I1202 21:17:36.052930  313474 logs.go:282] 0 containers: []
	W1202 21:17:36.052937  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:36.052942  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:36.053002  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:36.078331  313474 cri.go:89] found id: ""
	I1202 21:17:36.078345  313474 logs.go:282] 0 containers: []
	W1202 21:17:36.078353  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:36.078359  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:36.078421  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:36.102917  313474 cri.go:89] found id: ""
	I1202 21:17:36.102935  313474 logs.go:282] 0 containers: []
	W1202 21:17:36.102942  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:36.102952  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:36.102968  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:36.170369  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:36.162878   13506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:36.163399   13506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:36.164907   13506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:36.165325   13506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:36.166819   13506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:36.170381  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:36.170396  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:36.233123  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:36.233141  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:36.260318  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:36.260336  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:36.318506  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:36.318525  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:38.836941  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:38.847151  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:38.847224  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:38.875586  313474 cri.go:89] found id: ""
	I1202 21:17:38.875599  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.875606  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:38.875612  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:38.875671  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:38.898500  313474 cri.go:89] found id: ""
	I1202 21:17:38.898514  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.898530  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:38.898538  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:38.898601  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:38.922709  313474 cri.go:89] found id: ""
	I1202 21:17:38.922723  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.922730  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:38.922735  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:38.922791  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:38.950687  313474 cri.go:89] found id: ""
	I1202 21:17:38.950701  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.950717  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:38.950723  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:38.950789  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:38.973477  313474 cri.go:89] found id: ""
	I1202 21:17:38.973490  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.973506  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:38.973514  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:38.973590  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:38.999179  313474 cri.go:89] found id: ""
	I1202 21:17:38.999193  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.999200  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:38.999206  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:38.999264  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:39.028981  313474 cri.go:89] found id: ""
	I1202 21:17:39.028995  313474 logs.go:282] 0 containers: []
	W1202 21:17:39.029002  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:39.029010  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:39.029019  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:39.091914  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:39.091935  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:39.118017  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:39.118033  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:39.174784  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:39.174803  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:39.190239  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:39.190254  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:39.253019  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:39.244615   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:39.245484   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:39.247253   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:39.247889   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:39.249431   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:39.244615   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:39.245484   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:39.247253   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:39.247889   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:39.249431   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
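The probe that repeats above is a per-component container check; a minimal sketch of the same loop, assuming SSH access to the minikube node (component names and the crictl invocation are taken verbatim from the log):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      # Empty output here corresponds to the log's 'No container was found matching' warnings.
      [ -z "$ids" ] && echo "no container matching \"$c\"" || echo "$c: $ids"
    done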
	I1202 21:17:41.753253  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:41.763094  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:41.763167  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:41.787441  313474 cri.go:89] found id: ""
	I1202 21:17:41.787457  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.787464  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:41.787470  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:41.787529  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:41.815733  313474 cri.go:89] found id: ""
	I1202 21:17:41.815746  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.815753  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:41.815759  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:41.815819  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:41.839039  313474 cri.go:89] found id: ""
	I1202 21:17:41.839053  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.839060  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:41.839065  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:41.839125  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:41.867760  313474 cri.go:89] found id: ""
	I1202 21:17:41.867775  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.867783  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:41.867796  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:41.867860  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:41.894114  313474 cri.go:89] found id: ""
	I1202 21:17:41.894128  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.894135  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:41.894141  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:41.894202  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:41.918156  313474 cri.go:89] found id: ""
	I1202 21:17:41.918169  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.918177  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:41.918182  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:41.918242  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:41.942010  313474 cri.go:89] found id: ""
	I1202 21:17:41.942024  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.942032  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:41.942040  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:41.942050  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:41.971871  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:41.971886  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:42.031586  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:42.031606  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:42.050658  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:42.050675  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:42.125237  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:42.114951   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:42.115932   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:42.118118   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:42.118731   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:42.120706   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:42.114951   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:42.115932   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:42.118118   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:42.118731   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:42.120706   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:42.125249  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:42.125260  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:44.696530  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:44.706544  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:44.706605  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:44.734450  313474 cri.go:89] found id: ""
	I1202 21:17:44.734464  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.734470  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:44.734476  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:44.734535  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:44.758091  313474 cri.go:89] found id: ""
	I1202 21:17:44.758104  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.758111  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:44.758116  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:44.758178  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:44.782611  313474 cri.go:89] found id: ""
	I1202 21:17:44.782624  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.782631  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:44.782637  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:44.782700  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:44.806667  313474 cri.go:89] found id: ""
	I1202 21:17:44.806681  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.806689  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:44.806695  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:44.806757  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:44.830007  313474 cri.go:89] found id: ""
	I1202 21:17:44.830021  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.830031  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:44.830036  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:44.830098  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:44.853880  313474 cri.go:89] found id: ""
	I1202 21:17:44.853894  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.853901  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:44.853907  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:44.853970  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:44.878619  313474 cri.go:89] found id: ""
	I1202 21:17:44.878633  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.878640  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:44.878647  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:44.878657  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:44.894269  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:44.894286  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:44.959621  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:44.952378   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:44.952780   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:44.954251   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:44.954543   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:44.956016   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:44.952378   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:44.952780   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:44.954251   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:44.954543   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:44.956016   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:44.959632  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:44.959645  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:45.023289  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:45.023311  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:45.085458  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:45.085476  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
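Every describe-nodes failure in this run reduces to the API server not listening on localhost:8441; a quick hedged reproduction, reusing the URL and kubectl invocation exactly as they appear in the log:

    # Connection refused here matches the memcache.go errors above.
    curl -sk --max-time 5 "https://localhost:8441/api?timeout=32s" \
      || echo "apiserver not reachable on :8441"
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig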
	I1202 21:17:47.687794  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:47.697486  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:47.697557  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:47.723246  313474 cri.go:89] found id: ""
	I1202 21:17:47.723259  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.723266  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:47.723272  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:47.723329  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:47.746713  313474 cri.go:89] found id: ""
	I1202 21:17:47.746726  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.746733  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:47.746739  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:47.746798  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:47.771766  313474 cri.go:89] found id: ""
	I1202 21:17:47.771779  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.771786  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:47.771791  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:47.771847  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:47.795263  313474 cri.go:89] found id: ""
	I1202 21:17:47.795277  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.795284  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:47.795289  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:47.795349  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:47.824522  313474 cri.go:89] found id: ""
	I1202 21:17:47.824536  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.824543  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:47.824548  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:47.824610  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:47.849074  313474 cri.go:89] found id: ""
	I1202 21:17:47.849089  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.849096  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:47.849102  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:47.849163  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:47.878497  313474 cri.go:89] found id: ""
	I1202 21:17:47.878512  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.878518  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:47.878526  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:47.878537  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:47.935644  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:47.935663  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:47.951723  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:47.951739  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:48.020401  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:48.011900   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:48.012882   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:48.014694   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:48.015052   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:48.016693   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:48.011900   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:48.012882   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:48.014694   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:48.015052   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:48.016693   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:48.020422  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:48.020434  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:48.090722  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:48.090751  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:50.621799  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:50.631705  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:50.631774  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:50.656209  313474 cri.go:89] found id: ""
	I1202 21:17:50.656223  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.656230  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:50.656235  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:50.656300  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:50.680929  313474 cri.go:89] found id: ""
	I1202 21:17:50.680943  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.680950  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:50.680955  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:50.681014  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:50.705769  313474 cri.go:89] found id: ""
	I1202 21:17:50.705783  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.705790  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:50.705796  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:50.705858  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:50.731506  313474 cri.go:89] found id: ""
	I1202 21:17:50.731519  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.731526  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:50.731531  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:50.731588  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:50.754334  313474 cri.go:89] found id: ""
	I1202 21:17:50.754347  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.754354  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:50.754360  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:50.754421  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:50.778142  313474 cri.go:89] found id: ""
	I1202 21:17:50.778154  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.778162  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:50.778170  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:50.778228  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:50.801859  313474 cri.go:89] found id: ""
	I1202 21:17:50.801872  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.801880  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:50.801887  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:50.801898  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:50.862528  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:50.854527   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.855204   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.856801   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.857287   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.858805   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:50.854527   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.855204   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.856801   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.857287   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.858805   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:50.862542  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:50.862553  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:50.928955  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:50.928974  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:50.960442  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:50.960458  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:51.018671  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:51.018690  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
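The gathering pass pulls the same four sources on every iteration; bundled below as they appear in the log (the `which crictl || echo crictl` fallback, plus the docker fallback after `||`, keeps the container-status check usable when crictl is missing from PATH):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a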
	I1202 21:17:53.535533  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:53.550193  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:53.550254  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:53.579796  313474 cri.go:89] found id: ""
	I1202 21:17:53.579810  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.579817  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:53.579823  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:53.579885  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:53.606043  313474 cri.go:89] found id: ""
	I1202 21:17:53.606057  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.606063  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:53.606069  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:53.606125  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:53.631276  313474 cri.go:89] found id: ""
	I1202 21:17:53.631290  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.631297  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:53.631303  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:53.631360  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:53.662387  313474 cri.go:89] found id: ""
	I1202 21:17:53.662400  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.662407  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:53.662412  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:53.662467  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:53.686744  313474 cri.go:89] found id: ""
	I1202 21:17:53.686758  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.686765  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:53.686771  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:53.686832  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:53.710015  313474 cri.go:89] found id: ""
	I1202 21:17:53.710028  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.710035  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:53.710046  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:53.710102  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:53.733042  313474 cri.go:89] found id: ""
	I1202 21:17:53.733056  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.733068  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:53.733076  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:53.733088  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:53.789666  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:53.789726  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:53.805097  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:53.805113  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:53.871790  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:53.864429   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.865010   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.866541   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.866977   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.868406   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:53.864429   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.865010   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.866541   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.866977   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.868406   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:53.871801  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:53.871813  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:53.935260  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:53.935279  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:56.466348  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:56.476763  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:56.476830  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:56.501775  313474 cri.go:89] found id: ""
	I1202 21:17:56.501789  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.501795  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:56.501801  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:56.501861  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:56.526404  313474 cri.go:89] found id: ""
	I1202 21:17:56.526417  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.526424  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:56.526429  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:56.526487  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:56.555809  313474 cri.go:89] found id: ""
	I1202 21:17:56.555823  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.555845  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:56.555852  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:56.555923  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:56.586754  313474 cri.go:89] found id: ""
	I1202 21:17:56.586767  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.586794  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:56.586803  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:56.586871  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:56.612048  313474 cri.go:89] found id: ""
	I1202 21:17:56.612061  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.612068  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:56.612074  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:56.612134  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:56.636363  313474 cri.go:89] found id: ""
	I1202 21:17:56.636376  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.636383  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:56.636399  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:56.636456  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:56.668372  313474 cri.go:89] found id: ""
	I1202 21:17:56.668393  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.668400  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:56.668409  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:56.668418  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:56.724439  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:56.724458  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:56.740142  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:56.740161  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:56.802960  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:56.795097   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.796001   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.797561   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.798025   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.799523   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:56.795097   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.796001   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.797561   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.798025   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.799523   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:56.802970  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:56.802981  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:56.870497  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:56.870516  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:59.400859  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:59.410723  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:59.410792  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:59.434739  313474 cri.go:89] found id: ""
	I1202 21:17:59.434754  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.434761  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:59.434766  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:59.434823  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:59.459718  313474 cri.go:89] found id: ""
	I1202 21:17:59.459731  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.459738  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:59.459743  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:59.459800  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:59.484078  313474 cri.go:89] found id: ""
	I1202 21:17:59.484091  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.484098  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:59.484103  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:59.484161  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:59.510484  313474 cri.go:89] found id: ""
	I1202 21:17:59.510498  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.510505  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:59.510510  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:59.510569  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:59.535191  313474 cri.go:89] found id: ""
	I1202 21:17:59.535204  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.535211  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:59.535217  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:59.535278  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:59.566496  313474 cri.go:89] found id: ""
	I1202 21:17:59.566509  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.566516  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:59.566522  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:59.566591  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:59.605449  313474 cri.go:89] found id: ""
	I1202 21:17:59.605463  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.605470  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:59.605479  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:59.605492  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:59.670641  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:59.670659  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:59.698362  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:59.698378  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:59.755057  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:59.755075  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:59.771334  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:59.771350  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:59.833359  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:59.825268   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.826042   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.827699   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.828304   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.830013   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:59.825268   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.826042   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.827699   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.828304   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.830013   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
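The outer loop retries on a roughly three-second cadence, gated on the pgrep probe; an approximation of that wait, with the pattern copied from the log and the iteration bound an illustrative assumption:

    # Poll (an assumed ~5 minute cap) until a kube-apiserver process shows up.
    for i in $(seq 1 100); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 3
    done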
	I1202 21:18:02.334350  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:02.344576  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:02.344646  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:02.372330  313474 cri.go:89] found id: ""
	I1202 21:18:02.372347  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.372355  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:02.372361  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:02.372421  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:02.403621  313474 cri.go:89] found id: ""
	I1202 21:18:02.403635  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.403642  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:02.403648  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:02.403710  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:02.432672  313474 cri.go:89] found id: ""
	I1202 21:18:02.432686  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.432693  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:02.432700  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:02.432762  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:02.464631  313474 cri.go:89] found id: ""
	I1202 21:18:02.464645  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.464652  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:02.464658  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:02.464720  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:02.491546  313474 cri.go:89] found id: ""
	I1202 21:18:02.491559  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.491566  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:02.491572  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:02.491628  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:02.515275  313474 cri.go:89] found id: ""
	I1202 21:18:02.515289  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.515296  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:02.515301  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:02.515361  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:02.542560  313474 cri.go:89] found id: ""
	I1202 21:18:02.542574  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.542581  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:02.542589  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:02.542599  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:02.602107  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:02.602123  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:02.624739  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:02.624757  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:02.689790  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:02.681842   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.682258   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.683537   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.684226   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.686056   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:02.681842   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.682258   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.683537   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.684226   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.686056   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:02.689808  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:02.689819  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:02.752499  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:02.752518  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
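The cycle above repeats once per control-plane component: minikube probes the containerd CRI namespace for a container by name, finds none, then falls back to collecting kubelet, dmesg, describe-nodes, containerd, and container-status output. A minimal sketch of the same probe, built only from the commands already visible in the log (run inside the minikube node; the component list mirrors the one probed above):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      # Empty output here corresponds to the `found id: ""` lines in the log.
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching $name"
    done
    # The same fallback log sources the harness gathers after each probe:
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u containerd -n 400

With no control-plane containers running, every probe returns empty and the describe-nodes call fails against localhost:8441, which is exactly the pattern that repeats below.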
	I1202 21:18:05.283528  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:05.293718  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:05.293787  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:05.317745  313474 cri.go:89] found id: ""
	I1202 21:18:05.317758  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.317764  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:05.317770  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:05.317825  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:05.342721  313474 cri.go:89] found id: ""
	I1202 21:18:05.342735  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.342742  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:05.342747  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:05.342805  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:05.367273  313474 cri.go:89] found id: ""
	I1202 21:18:05.367295  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.367303  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:05.367311  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:05.367374  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:05.392617  313474 cri.go:89] found id: ""
	I1202 21:18:05.392630  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.392639  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:05.392644  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:05.392720  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:05.416853  313474 cri.go:89] found id: ""
	I1202 21:18:05.416866  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.416873  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:05.416878  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:05.416939  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:05.440831  313474 cri.go:89] found id: ""
	I1202 21:18:05.440845  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.440852  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:05.440858  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:05.440925  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:05.468689  313474 cri.go:89] found id: ""
	I1202 21:18:05.468702  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.468709  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:05.468718  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:05.468728  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:05.532922  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:05.524825   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.525211   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.526892   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.527288   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.529015   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:05.524825   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.525211   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.526892   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.527288   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.529015   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:05.532931  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:05.532956  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:05.603067  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:05.603086  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:05.634107  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:05.634125  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:05.690509  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:05.690527  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:08.208420  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:08.218671  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:08.218745  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:08.244809  313474 cri.go:89] found id: ""
	I1202 21:18:08.244823  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.244831  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:08.244837  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:08.244895  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:08.270054  313474 cri.go:89] found id: ""
	I1202 21:18:08.270068  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.270075  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:08.270080  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:08.270145  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:08.295277  313474 cri.go:89] found id: ""
	I1202 21:18:08.295291  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.295298  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:08.295304  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:08.295366  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:08.319112  313474 cri.go:89] found id: ""
	I1202 21:18:08.319125  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.319132  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:08.319138  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:08.319205  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:08.342874  313474 cri.go:89] found id: ""
	I1202 21:18:08.342888  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.342901  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:08.342908  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:08.342965  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:08.371370  313474 cri.go:89] found id: ""
	I1202 21:18:08.371384  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.371391  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:08.371397  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:08.371464  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:08.396154  313474 cri.go:89] found id: ""
	I1202 21:18:08.396167  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.396175  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:08.396183  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:08.396193  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:08.451337  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:08.451356  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:08.466550  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:08.466565  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:08.528549  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:08.520562   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.521190   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.522827   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.523353   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.525032   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:08.520562   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.521190   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.522827   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.523353   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.525032   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:08.528558  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:08.528569  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:08.606008  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:08.606028  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:11.138262  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:11.148937  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:11.148998  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:11.173696  313474 cri.go:89] found id: ""
	I1202 21:18:11.173710  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.173718  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:11.173723  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:11.173790  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:11.198792  313474 cri.go:89] found id: ""
	I1202 21:18:11.198805  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.198813  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:11.198818  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:11.198880  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:11.222802  313474 cri.go:89] found id: ""
	I1202 21:18:11.222816  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.222823  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:11.222829  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:11.222890  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:11.247731  313474 cri.go:89] found id: ""
	I1202 21:18:11.247745  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.247752  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:11.247757  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:11.247814  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:11.272133  313474 cri.go:89] found id: ""
	I1202 21:18:11.272146  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.272153  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:11.272159  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:11.272217  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:11.296871  313474 cri.go:89] found id: ""
	I1202 21:18:11.296885  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.296892  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:11.296897  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:11.296958  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:11.321716  313474 cri.go:89] found id: ""
	I1202 21:18:11.321729  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.321736  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:11.321744  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:11.321754  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:11.377048  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:11.377066  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:11.393570  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:11.393587  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:11.458188  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:11.449467   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.450311   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.451986   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.452297   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.454177   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:11.449467   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.450311   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.451986   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.452297   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.454177   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:11.458204  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:11.458220  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:11.525584  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:11.525602  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:14.058201  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:14.068731  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:14.068793  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:14.095660  313474 cri.go:89] found id: ""
	I1202 21:18:14.095674  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.095682  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:14.095688  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:14.095754  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:14.122077  313474 cri.go:89] found id: ""
	I1202 21:18:14.122090  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.122097  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:14.122102  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:14.122163  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:14.150178  313474 cri.go:89] found id: ""
	I1202 21:18:14.150192  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.150199  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:14.150204  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:14.150265  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:14.175340  313474 cri.go:89] found id: ""
	I1202 21:18:14.175353  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.175360  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:14.175372  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:14.175431  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:14.199105  313474 cri.go:89] found id: ""
	I1202 21:18:14.199118  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.199125  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:14.199130  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:14.199187  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:14.224274  313474 cri.go:89] found id: ""
	I1202 21:18:14.224288  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.224295  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:14.224300  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:14.224363  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:14.251445  313474 cri.go:89] found id: ""
	I1202 21:18:14.251458  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.251465  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:14.251473  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:14.251487  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:14.320250  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:14.311973   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.312750   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.314433   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.314978   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.316585   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:14.311973   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.312750   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.314433   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.314978   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.316585   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:14.320261  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:14.320274  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:14.383255  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:14.383276  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:14.411409  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:14.411425  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:14.472223  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:14.472248  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
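Every describe-nodes attempt above dies with "connection refused" on [::1]:8441, so a natural manual follow-up is to check the port directly. The checks below are hypothetical diagnostics, not commands the harness runs; they assume `ss` and `curl` are available inside the node:

    # Hypothetical manual checks (not part of the harness output above):
    # confirm nothing is listening on the apiserver port, then hit /healthz.
    sudo ss -ltnp | grep ':8441' || echo 'nothing listening on 8441'
    curl -ksS https://localhost:8441/healthz; echo

An empty `ss` match plus a refused `curl` would confirm the apiserver process never bound the port, consistent with the empty `crictl` probes.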
	I1202 21:18:16.989804  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:17.000093  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:17.000155  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:17.028092  313474 cri.go:89] found id: ""
	I1202 21:18:17.028116  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.028124  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:17.028130  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:17.028198  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:17.052924  313474 cri.go:89] found id: ""
	I1202 21:18:17.052945  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.052952  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:17.052958  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:17.053029  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:17.078703  313474 cri.go:89] found id: ""
	I1202 21:18:17.078727  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.078734  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:17.078742  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:17.078812  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:17.104168  313474 cri.go:89] found id: ""
	I1202 21:18:17.104182  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.104189  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:17.104195  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:17.104299  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:17.127996  313474 cri.go:89] found id: ""
	I1202 21:18:17.128010  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.128017  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:17.128023  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:17.128088  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:17.152013  313474 cri.go:89] found id: ""
	I1202 21:18:17.152027  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.152034  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:17.152040  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:17.152100  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:17.180838  313474 cri.go:89] found id: ""
	I1202 21:18:17.180853  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.180860  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:17.180868  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:17.180878  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:17.208724  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:17.208740  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:17.264017  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:17.264035  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:17.280767  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:17.280783  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:17.347738  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:17.340260   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.340861   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.342357   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.342869   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.344337   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:17.340260   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.340861   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.342357   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.342869   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.344337   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:17.347749  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:17.347762  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:19.913786  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:19.923690  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:19.923756  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:19.948485  313474 cri.go:89] found id: ""
	I1202 21:18:19.948499  313474 logs.go:282] 0 containers: []
	W1202 21:18:19.948506  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:19.948512  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:19.948572  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:19.973040  313474 cri.go:89] found id: ""
	I1202 21:18:19.973054  313474 logs.go:282] 0 containers: []
	W1202 21:18:19.973062  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:19.973067  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:19.973129  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:19.997059  313474 cri.go:89] found id: ""
	I1202 21:18:19.997073  313474 logs.go:282] 0 containers: []
	W1202 21:18:19.997080  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:19.997086  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:19.997143  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:20.023852  313474 cri.go:89] found id: ""
	I1202 21:18:20.023868  313474 logs.go:282] 0 containers: []
	W1202 21:18:20.023876  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:20.023882  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:20.023963  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:20.050761  313474 cri.go:89] found id: ""
	I1202 21:18:20.050775  313474 logs.go:282] 0 containers: []
	W1202 21:18:20.050782  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:20.050788  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:20.050849  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:20.080281  313474 cri.go:89] found id: ""
	I1202 21:18:20.080299  313474 logs.go:282] 0 containers: []
	W1202 21:18:20.080318  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:20.080324  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:20.080396  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:20.104993  313474 cri.go:89] found id: ""
	I1202 21:18:20.105008  313474 logs.go:282] 0 containers: []
	W1202 21:18:20.105015  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:20.105024  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:20.105035  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:20.165434  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:20.165453  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:20.181890  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:20.181907  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:20.248978  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:20.240575   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.241189   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.242918   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.243424   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.244930   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:20.240575   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.241189   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.242918   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.243424   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.244930   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:20.248989  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:20.249000  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:20.310960  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:20.310980  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:22.840884  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:22.851984  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:22.852053  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:22.877753  313474 cri.go:89] found id: ""
	I1202 21:18:22.877766  313474 logs.go:282] 0 containers: []
	W1202 21:18:22.877773  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:22.877779  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:22.877837  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:22.906410  313474 cri.go:89] found id: ""
	I1202 21:18:22.906424  313474 logs.go:282] 0 containers: []
	W1202 21:18:22.906431  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:22.906437  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:22.906500  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:22.930057  313474 cri.go:89] found id: ""
	I1202 21:18:22.930071  313474 logs.go:282] 0 containers: []
	W1202 21:18:22.930077  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:22.930083  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:22.930143  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:22.953434  313474 cri.go:89] found id: ""
	I1202 21:18:22.953447  313474 logs.go:282] 0 containers: []
	W1202 21:18:22.953454  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:22.953460  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:22.953537  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:22.977521  313474 cri.go:89] found id: ""
	I1202 21:18:22.977534  313474 logs.go:282] 0 containers: []
	W1202 21:18:22.977541  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:22.977546  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:22.977605  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:23.002292  313474 cri.go:89] found id: ""
	I1202 21:18:23.002308  313474 logs.go:282] 0 containers: []
	W1202 21:18:23.002316  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:23.002322  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:23.002394  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:23.036373  313474 cri.go:89] found id: ""
	I1202 21:18:23.036387  313474 logs.go:282] 0 containers: []
	W1202 21:18:23.036395  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:23.036403  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:23.036415  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:23.095655  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:23.095673  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:23.111535  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:23.111553  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:23.173705  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:23.165173   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.166011   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.167619   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.168221   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.169997   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:23.165173   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.166011   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.167619   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.168221   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.169997   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:23.173715  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:23.173726  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:23.236268  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:23.236289  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
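The timestamps show the harness re-running the whole probe roughly every three seconds (21:18:02, :05, :08, :11, :14, :17, :19, :22, :25). A sketch of that retry cadence as a standalone loop; the pgrep pattern is the one the harness runs, while the 120 s deadline is an assumed value, not taken from the log:

    # Poll for a kube-apiserver process about every 3 s until it appears
    # or an (assumed) 120 s deadline passes.
    deadline=$((SECONDS + 120))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      (( SECONDS >= deadline )) && { echo 'apiserver never came up' >&2; break; }
      sleep 3
    done

In this run the loop never succeeds: the remaining cycles below end the same way, with every component probe empty.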
	I1202 21:18:25.766078  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:25.775931  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:25.775992  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:25.803734  313474 cri.go:89] found id: ""
	I1202 21:18:25.803748  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.803755  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:25.803761  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:25.803819  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:25.834986  313474 cri.go:89] found id: ""
	I1202 21:18:25.834998  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.835005  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:25.835011  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:25.835067  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:25.868893  313474 cri.go:89] found id: ""
	I1202 21:18:25.868906  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.868914  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:25.868919  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:25.868978  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:25.893444  313474 cri.go:89] found id: ""
	I1202 21:18:25.893458  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.893465  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:25.893470  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:25.893535  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:25.920960  313474 cri.go:89] found id: ""
	I1202 21:18:25.920981  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.921016  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:25.921022  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:25.921084  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:25.945498  313474 cri.go:89] found id: ""
	I1202 21:18:25.945512  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.945519  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:25.945524  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:25.945584  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:25.970324  313474 cri.go:89] found id: ""
	I1202 21:18:25.970338  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.970345  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:25.970352  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:25.970363  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:26.026110  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:26.026130  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:26.042911  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:26.042929  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:26.110842  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:26.102647   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.103280   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.105091   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.105699   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.107315   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:26.102647   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.103280   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.105091   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.105699   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.107315   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:26.110852  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:26.110863  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:26.172311  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:26.172331  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
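	[editor's note] Every describe-nodes attempt in this stretch fails the same way: "dial tcp [::1]:8441: connect: connection refused" means nothing is listening on the apiserver port yet (8441 matches the --apiserver-port this test starts with). The condition can be reproduced with a plain TCP probe; a sketch, assuming the same host and port:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The check kubectl's error implies: is anything accepting
		// TCP connections on the apiserver port?
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}

	Until that dial succeeds, every kubectl invocation in the cycle keeps emitting the memcache.go "connection refused" errors seen above.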
	I1202 21:18:28.700308  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:28.710060  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:28.710120  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:28.735161  313474 cri.go:89] found id: ""
	I1202 21:18:28.735174  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.735181  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:28.735186  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:28.735244  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:28.759111  313474 cri.go:89] found id: ""
	I1202 21:18:28.759125  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.759132  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:28.759138  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:28.759195  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:28.782985  313474 cri.go:89] found id: ""
	I1202 21:18:28.782999  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.783006  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:28.783011  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:28.783069  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:28.820172  313474 cri.go:89] found id: ""
	I1202 21:18:28.820186  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.820203  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:28.820208  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:28.820274  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:28.850833  313474 cri.go:89] found id: ""
	I1202 21:18:28.850846  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.850863  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:28.850869  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:28.850927  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:28.882012  313474 cri.go:89] found id: ""
	I1202 21:18:28.882025  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.882032  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:28.882038  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:28.882093  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:28.908111  313474 cri.go:89] found id: ""
	I1202 21:18:28.908125  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.908132  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:28.908139  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:28.908150  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:28.934318  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:28.934333  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:28.989499  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:28.989518  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:29.007046  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:29.007064  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:29.083779  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:29.075539   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.076231   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.077811   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.078418   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.080191   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:29.075539   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.076231   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.077811   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.078418   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.080191   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:29.083789  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:29.083801  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
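	[editor's note] The cycle restarts roughly every three seconds with sudo pgrep -xnf kube-apiserver.*minikube.*; only when that process check comes back empty does the log-gathering pass above run again. A sketch of the implied wait loop under the same assumptions (pgrep exits non-zero when nothing matches; the 3s interval is read off the log timestamps, not taken from minikube's source):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning reports whether a kube-apiserver process for the
	// minikube profile exists, the same signal the log's pgrep call uses.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		deadline := time.Now().Add(1 * time.Minute)
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver is up")
				return
			}
			time.Sleep(3 * time.Second) // matches the spacing between polling rounds above
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}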
	I1202 21:18:31.646079  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:31.657486  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:31.657549  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:31.683678  313474 cri.go:89] found id: ""
	I1202 21:18:31.683692  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.683699  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:31.683704  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:31.683759  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:31.712328  313474 cri.go:89] found id: ""
	I1202 21:18:31.712342  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.712349  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:31.712354  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:31.712410  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:31.736788  313474 cri.go:89] found id: ""
	I1202 21:18:31.736802  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.736808  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:31.736814  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:31.736870  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:31.761882  313474 cri.go:89] found id: ""
	I1202 21:18:31.761896  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.761903  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:31.761908  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:31.761968  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:31.785756  313474 cri.go:89] found id: ""
	I1202 21:18:31.785770  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.785778  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:31.785783  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:31.785843  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:31.820411  313474 cri.go:89] found id: ""
	I1202 21:18:31.820424  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.820431  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:31.820437  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:31.820493  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:31.853589  313474 cri.go:89] found id: ""
	I1202 21:18:31.853603  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.853611  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:31.853619  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:31.853630  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:31.921797  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:31.913330   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.913979   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.915473   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.915981   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.917835   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:31.913330   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.913979   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.915473   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.915981   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.917835   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:31.921807  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:31.921818  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:31.983142  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:31.983161  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:32.019032  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:32.019047  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:32.075826  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:32.075845  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
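	[editor's note] The kubelet and containerd sections are gathered with journalctl -u <unit> -n 400, i.e. the last 400 journal lines per systemd unit. A self-contained sketch of that collection step (illustrative only; assumes units named kubelet and containerd exist on the node):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// unitLogs returns the last n journal lines for a systemd unit,
	// as the "Gathering logs for kubelet/containerd" steps do.
	func unitLogs(unit string, n int) (string, error) {
		out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, u := range []string{"kubelet", "containerd"} {
			logs, err := unitLogs(u, 400)
			if err != nil {
				fmt.Printf("failed to read %s journal: %v\n", u, err)
				continue
			}
			fmt.Printf("=== %s ===\n%s", u, logs)
		}
	}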
	I1202 21:18:34.595298  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:34.606306  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:34.606370  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:34.629306  313474 cri.go:89] found id: ""
	I1202 21:18:34.629321  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.629328  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:34.629334  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:34.629393  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:34.653285  313474 cri.go:89] found id: ""
	I1202 21:18:34.653299  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.653305  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:34.653311  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:34.653369  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:34.679517  313474 cri.go:89] found id: ""
	I1202 21:18:34.679531  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.679538  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:34.679543  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:34.679601  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:34.703382  313474 cri.go:89] found id: ""
	I1202 21:18:34.703395  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.703403  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:34.703409  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:34.703472  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:34.726696  313474 cri.go:89] found id: ""
	I1202 21:18:34.726710  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.726717  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:34.726723  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:34.726784  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:34.751128  313474 cri.go:89] found id: ""
	I1202 21:18:34.751141  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.751148  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:34.751153  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:34.751213  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:34.775011  313474 cri.go:89] found id: ""
	I1202 21:18:34.775025  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.775032  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:34.775047  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:34.775057  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:34.835694  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:34.835712  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:34.852614  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:34.852628  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:34.915032  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:34.907089   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.907665   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.909375   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.909948   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.911554   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:34.907089   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.907665   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.909375   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.909948   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.911554   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:34.915042  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:34.915053  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:34.976914  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:34.976933  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
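	[editor's note] Each round sweeps the same component list (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) with crictl ps -a --quiet --name=<component>; an empty result is what produces the found id: "" / "0 containers" lines. A sketch of that sweep, assuming crictl can reach the containerd CRI socket:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs returns the IDs crictl prints for containers whose
	// name matches the given component; --quiet prints one ID per line.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, c := range components {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("listing %q failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
				continue
			}
			fmt.Println(c, ids)
		}
	}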
	I1202 21:18:37.512733  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:37.523297  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:37.523360  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:37.547452  313474 cri.go:89] found id: ""
	I1202 21:18:37.547471  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.547478  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:37.547484  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:37.547553  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:37.573439  313474 cri.go:89] found id: ""
	I1202 21:18:37.573453  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.573460  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:37.573471  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:37.573529  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:37.597566  313474 cri.go:89] found id: ""
	I1202 21:18:37.597579  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.597586  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:37.597593  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:37.597689  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:37.622743  313474 cri.go:89] found id: ""
	I1202 21:18:37.622757  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.622764  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:37.622769  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:37.622833  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:37.650998  313474 cri.go:89] found id: ""
	I1202 21:18:37.651012  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.651019  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:37.651024  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:37.651082  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:37.675113  313474 cri.go:89] found id: ""
	I1202 21:18:37.675126  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.675133  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:37.675139  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:37.675198  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:37.703998  313474 cri.go:89] found id: ""
	I1202 21:18:37.704011  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.704019  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:37.704028  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:37.704039  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:37.731894  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:37.731909  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:37.789286  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:37.789304  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:37.806026  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:37.806041  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:37.883651  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:37.875622   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.876183   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.877815   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.878233   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.879787   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:37.875622   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.876183   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.877815   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.878233   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.879787   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:37.883661  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:37.883672  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:40.449584  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:40.459754  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:40.459815  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:40.484277  313474 cri.go:89] found id: ""
	I1202 21:18:40.484290  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.484297  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:40.484303  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:40.484363  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:40.512957  313474 cri.go:89] found id: ""
	I1202 21:18:40.512971  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.512978  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:40.512984  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:40.513043  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:40.539344  313474 cri.go:89] found id: ""
	I1202 21:18:40.539357  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.539365  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:40.539371  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:40.539439  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:40.569762  313474 cri.go:89] found id: ""
	I1202 21:18:40.569776  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.569783  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:40.569789  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:40.569865  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:40.599530  313474 cri.go:89] found id: ""
	I1202 21:18:40.599589  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.599597  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:40.599603  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:40.599663  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:40.624508  313474 cri.go:89] found id: ""
	I1202 21:18:40.624521  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.624527  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:40.624533  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:40.624590  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:40.654772  313474 cri.go:89] found id: ""
	I1202 21:18:40.654786  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.654793  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:40.654800  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:40.654811  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:40.671128  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:40.671146  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:40.739442  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:40.731281   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.732035   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.733699   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.734266   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.735915   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:40.731281   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.732035   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.733699   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.734266   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.735915   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:40.739452  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:40.739465  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:40.802579  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:40.802600  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:40.842887  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:40.842905  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
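	[editor's note] The dmesg step keeps only warning-and-above kernel messages (--level warn,err,crit,alert,emerg) and tails the last 400 lines; -P disables the pager, -H enables human-readable output, and -L=never disables color. The equivalent collection as a sketch, with the pipeline copied verbatim from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same pipeline the log runs: warning-and-above kernel messages,
		// human-readable, no pager or color, last 400 lines only.
		cmd := exec.Command("/bin/bash", "-c",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Println("dmesg failed:", err)
			return
		}
		fmt.Print(string(out))
	}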
	I1202 21:18:43.407132  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:43.417207  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:43.417283  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:43.445187  313474 cri.go:89] found id: ""
	I1202 21:18:43.445201  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.445208  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:43.445214  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:43.445270  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:43.469935  313474 cri.go:89] found id: ""
	I1202 21:18:43.469949  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.469957  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:43.469962  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:43.470021  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:43.495370  313474 cri.go:89] found id: ""
	I1202 21:18:43.495383  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.495391  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:43.495396  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:43.495454  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:43.519120  313474 cri.go:89] found id: ""
	I1202 21:18:43.519133  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.519149  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:43.519155  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:43.519213  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:43.548201  313474 cri.go:89] found id: ""
	I1202 21:18:43.548216  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.548223  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:43.548228  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:43.548290  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:43.573077  313474 cri.go:89] found id: ""
	I1202 21:18:43.573091  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.573099  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:43.573104  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:43.573166  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:43.598032  313474 cri.go:89] found id: ""
	I1202 21:18:43.598046  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.598053  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:43.598062  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:43.598072  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:43.625764  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:43.625780  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:43.681770  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:43.681787  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:43.698012  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:43.698028  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:43.764049  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:43.756290   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.756978   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.758602   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.759087   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.760588   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:43.756290   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.756978   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.758602   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.759087   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.760588   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:43.764060  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:43.764071  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:46.332493  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:46.342812  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:46.342877  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:46.367988  313474 cri.go:89] found id: ""
	I1202 21:18:46.368002  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.368018  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:46.368024  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:46.368091  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:46.392483  313474 cri.go:89] found id: ""
	I1202 21:18:46.392496  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.392512  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:46.392518  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:46.392574  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:46.429495  313474 cri.go:89] found id: ""
	I1202 21:18:46.429514  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.429522  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:46.429527  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:46.429598  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:46.455204  313474 cri.go:89] found id: ""
	I1202 21:18:46.455218  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.455225  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:46.455231  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:46.455295  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:46.479783  313474 cri.go:89] found id: ""
	I1202 21:18:46.479800  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.479808  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:46.479813  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:46.479880  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:46.504674  313474 cri.go:89] found id: ""
	I1202 21:18:46.504688  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.504696  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:46.504701  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:46.504767  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:46.534919  313474 cri.go:89] found id: ""
	I1202 21:18:46.534933  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.534940  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:46.534948  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:46.534968  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:46.591507  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:46.591526  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:46.607216  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:46.607233  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:46.672448  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:46.664475   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.665046   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.666657   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.667197   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.668631   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:46.664475   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.665046   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.666657   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.667197   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.668631   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:46.672459  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:46.672469  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:46.738404  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:46.738424  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:49.269367  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:49.279307  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:49.279370  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:49.302419  313474 cri.go:89] found id: ""
	I1202 21:18:49.302432  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.302439  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:49.302445  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:49.302501  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:49.328004  313474 cri.go:89] found id: ""
	I1202 21:18:49.328018  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.328025  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:49.328030  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:49.328088  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:49.352661  313474 cri.go:89] found id: ""
	I1202 21:18:49.352675  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.352682  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:49.352687  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:49.352746  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:49.377363  313474 cri.go:89] found id: ""
	I1202 21:18:49.377376  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.377383  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:49.377389  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:49.377447  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:49.401369  313474 cri.go:89] found id: ""
	I1202 21:18:49.401383  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.401390  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:49.401396  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:49.401461  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:49.425207  313474 cri.go:89] found id: ""
	I1202 21:18:49.425221  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.425228  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:49.425233  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:49.425295  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:49.451589  313474 cri.go:89] found id: ""
	I1202 21:18:49.451604  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.451611  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:49.451619  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:49.451630  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:49.513462  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:49.505690   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:49.506363   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:49.507990   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:49.508509   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:49.510072   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:49.513472  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:49.513482  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:49.575782  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:49.575801  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:49.610890  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:49.610906  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:49.667106  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:49.667123  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
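The block above is one iteration of minikube's readiness loop: it looks for a kube-apiserver process with pgrep, asks the CRI runtime for each control-plane container by name, and, finding none, gathers kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying roughly three seconds later. A minimal Go sketch of that polling pattern (assumptions: crictl is on the node's PATH and runnable via sudo; the helper names are hypothetical, not minikube's actual source):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// components mirrors the container names polled in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

// containerIDs asks the CRI runtime for container IDs matching a name,
// mirroring "sudo crictl ps -a --quiet --name=<component>".
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for {
		missing := 0
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", c)
				missing++
			}
		}
		if missing == 0 {
			fmt.Println("all control-plane containers present")
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence in the log
	}
}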
	I1202 21:18:52.184506  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:52.194827  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:52.194887  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:52.221289  313474 cri.go:89] found id: ""
	I1202 21:18:52.221303  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.221310  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:52.221315  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:52.221385  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:52.247152  313474 cri.go:89] found id: ""
	I1202 21:18:52.247167  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.247174  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:52.247179  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:52.247240  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:52.270523  313474 cri.go:89] found id: ""
	I1202 21:18:52.270539  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.270545  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:52.270550  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:52.270610  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:52.294232  313474 cri.go:89] found id: ""
	I1202 21:18:52.294246  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.294253  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:52.294259  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:52.294321  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:52.322550  313474 cri.go:89] found id: ""
	I1202 21:18:52.322563  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.322570  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:52.322576  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:52.322635  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:52.350081  313474 cri.go:89] found id: ""
	I1202 21:18:52.350095  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.350103  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:52.350110  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:52.350171  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:52.373782  313474 cri.go:89] found id: ""
	I1202 21:18:52.373796  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.373817  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:52.373826  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:52.373836  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:52.429396  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:52.429415  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:52.445303  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:52.445319  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:52.509061  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:52.500762   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:52.501579   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:52.503214   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:52.503522   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:52.505017   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:52.509073  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:52.509087  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:52.572171  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:52.572191  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
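Every describe-nodes attempt above fails at the TCP layer: "connect: connection refused" on [::1]:8441 means nothing is listening on the apiserver port at all, as opposed to a firewall drop (which would time out) or an unhealthy-but-listening apiserver (which would accept the connection and then return an error). A minimal Go sketch to tell these cases apart, assuming the check runs somewhere the node's port 8441 is reachable:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" comes back almost immediately; a silent
	// packet drop would instead burn the whole two-second timeout.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8441")
}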
	I1202 21:18:55.105321  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:55.115684  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:55.115746  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:55.143285  313474 cri.go:89] found id: ""
	I1202 21:18:55.143301  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.143313  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:55.143319  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:55.143379  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:55.168631  313474 cri.go:89] found id: ""
	I1202 21:18:55.168645  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.168652  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:55.168658  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:55.168718  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:55.194277  313474 cri.go:89] found id: ""
	I1202 21:18:55.194290  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.194297  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:55.194303  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:55.194361  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:55.221594  313474 cri.go:89] found id: ""
	I1202 21:18:55.221607  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.221614  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:55.221620  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:55.221738  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:55.245639  313474 cri.go:89] found id: ""
	I1202 21:18:55.245684  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.245691  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:55.245697  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:55.245758  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:55.270064  313474 cri.go:89] found id: ""
	I1202 21:18:55.270078  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.270085  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:55.270091  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:55.270151  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:55.298494  313474 cri.go:89] found id: ""
	I1202 21:18:55.298508  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.298515  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:55.298524  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:55.298534  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:55.354337  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:55.354358  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:55.371291  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:55.371306  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:55.441025  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:55.432197   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:55.433031   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:55.434888   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:55.435565   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:55.437238   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:55.441036  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:55.441048  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:55.508470  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:55.508491  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:58.040648  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:58.052163  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:58.052231  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:58.082641  313474 cri.go:89] found id: ""
	I1202 21:18:58.082655  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.082663  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:58.082668  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:58.082727  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:58.109547  313474 cri.go:89] found id: ""
	I1202 21:18:58.109561  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.109579  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:58.109585  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:58.109687  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:58.134886  313474 cri.go:89] found id: ""
	I1202 21:18:58.134900  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.134908  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:58.134913  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:58.134973  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:58.158535  313474 cri.go:89] found id: ""
	I1202 21:18:58.158549  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.158555  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:58.158561  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:58.158626  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:58.181483  313474 cri.go:89] found id: ""
	I1202 21:18:58.181498  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.181505  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:58.181510  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:58.181567  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:58.207661  313474 cri.go:89] found id: ""
	I1202 21:18:58.207675  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.207682  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:58.207687  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:58.207744  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:58.231079  313474 cri.go:89] found id: ""
	I1202 21:18:58.231092  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.231099  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:58.231107  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:58.231117  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:58.286068  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:58.286086  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:58.301966  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:58.301983  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:58.371817  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:58.363950   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:58.364690   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:58.366066   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:58.366688   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:58.368325   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:58.371827  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:58.371838  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:58.434916  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:58.434935  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:00.970468  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:00.981089  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:00.981161  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:01.007840  313474 cri.go:89] found id: ""
	I1202 21:19:01.007855  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.007863  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:01.007868  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:01.007927  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:01.032203  313474 cri.go:89] found id: ""
	I1202 21:19:01.032217  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.032224  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:01.032229  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:01.032300  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:01.065098  313474 cri.go:89] found id: ""
	I1202 21:19:01.065111  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.065119  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:01.065124  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:01.065186  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:01.091481  313474 cri.go:89] found id: ""
	I1202 21:19:01.091495  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.091502  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:01.091508  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:01.091584  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:01.119523  313474 cri.go:89] found id: ""
	I1202 21:19:01.119538  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.119546  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:01.119552  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:01.119617  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:01.145559  313474 cri.go:89] found id: ""
	I1202 21:19:01.145574  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.145584  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:01.145590  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:01.145699  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:01.171870  313474 cri.go:89] found id: ""
	I1202 21:19:01.171885  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.171892  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:01.171900  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:01.171929  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:01.236730  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:01.228637   16526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:01.229293   16526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:01.230833   16526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:01.231277   16526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:01.232768   16526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:01.236741  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:01.236752  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:01.298712  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:01.298731  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:01.327192  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:01.327213  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:01.382852  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:01.382869  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:03.899143  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:03.908997  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:03.909059  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:03.932688  313474 cri.go:89] found id: ""
	I1202 21:19:03.932701  313474 logs.go:282] 0 containers: []
	W1202 21:19:03.932708  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:03.932714  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:03.932773  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:03.957073  313474 cri.go:89] found id: ""
	I1202 21:19:03.957087  313474 logs.go:282] 0 containers: []
	W1202 21:19:03.957095  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:03.957100  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:03.957161  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:03.981206  313474 cri.go:89] found id: ""
	I1202 21:19:03.981219  313474 logs.go:282] 0 containers: []
	W1202 21:19:03.981233  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:03.981239  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:03.981301  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:04.008306  313474 cri.go:89] found id: ""
	I1202 21:19:04.008322  313474 logs.go:282] 0 containers: []
	W1202 21:19:04.008329  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:04.008335  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:04.008401  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:04.033825  313474 cri.go:89] found id: ""
	I1202 21:19:04.033839  313474 logs.go:282] 0 containers: []
	W1202 21:19:04.033847  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:04.033853  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:04.033912  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:04.062862  313474 cri.go:89] found id: ""
	I1202 21:19:04.062876  313474 logs.go:282] 0 containers: []
	W1202 21:19:04.062883  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:04.062890  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:04.062957  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:04.098358  313474 cri.go:89] found id: ""
	I1202 21:19:04.098372  313474 logs.go:282] 0 containers: []
	W1202 21:19:04.098379  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:04.098388  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:04.098398  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:04.160856  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:04.160874  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:04.176607  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:04.176625  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:04.239202  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:04.231372   16639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:04.231808   16639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:04.233616   16639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:04.233967   16639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:04.235435   16639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:04.239213  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:04.239224  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:04.304570  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:04.304588  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:06.834974  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:06.846425  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:06.846496  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:06.874499  313474 cri.go:89] found id: ""
	I1202 21:19:06.874513  313474 logs.go:282] 0 containers: []
	W1202 21:19:06.874520  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:06.874526  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:06.874585  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:06.899405  313474 cri.go:89] found id: ""
	I1202 21:19:06.899419  313474 logs.go:282] 0 containers: []
	W1202 21:19:06.899426  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:06.899432  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:06.899490  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:06.927927  313474 cri.go:89] found id: ""
	I1202 21:19:06.927940  313474 logs.go:282] 0 containers: []
	W1202 21:19:06.927947  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:06.927953  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:06.928017  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:06.956416  313474 cri.go:89] found id: ""
	I1202 21:19:06.956430  313474 logs.go:282] 0 containers: []
	W1202 21:19:06.956437  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:06.956443  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:06.956503  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:06.982016  313474 cri.go:89] found id: ""
	I1202 21:19:06.982030  313474 logs.go:282] 0 containers: []
	W1202 21:19:06.982038  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:06.982043  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:06.982102  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:07.008744  313474 cri.go:89] found id: ""
	I1202 21:19:07.008758  313474 logs.go:282] 0 containers: []
	W1202 21:19:07.008765  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:07.008771  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:07.008831  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:07.051903  313474 cri.go:89] found id: ""
	I1202 21:19:07.051917  313474 logs.go:282] 0 containers: []
	W1202 21:19:07.051924  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:07.051933  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:07.051956  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:07.111866  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:07.111885  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:07.131193  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:07.131212  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:07.197137  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:07.189103   16738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:07.189535   16738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:07.191127   16738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:07.191787   16738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:07.193245   16738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:07.197148  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:07.197159  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:07.258783  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:07.258802  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:09.784238  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:09.795790  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:09.795850  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:09.821880  313474 cri.go:89] found id: ""
	I1202 21:19:09.821894  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.821902  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:09.821907  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:09.821970  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:09.845564  313474 cri.go:89] found id: ""
	I1202 21:19:09.845579  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.845586  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:09.845617  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:09.845698  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:09.874848  313474 cri.go:89] found id: ""
	I1202 21:19:09.874862  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.874875  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:09.874880  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:09.874939  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:09.899396  313474 cri.go:89] found id: ""
	I1202 21:19:09.899410  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.899417  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:09.899423  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:09.899485  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:09.928207  313474 cri.go:89] found id: ""
	I1202 21:19:09.928231  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.928291  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:09.928297  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:09.928367  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:09.953363  313474 cri.go:89] found id: ""
	I1202 21:19:09.953386  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.953393  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:09.953400  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:09.953478  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:09.977852  313474 cri.go:89] found id: ""
	I1202 21:19:09.977866  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.977873  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:09.977881  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:09.977891  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:10.035535  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:10.035554  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:10.053223  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:10.053240  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:10.129538  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:10.121217   16842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:10.122156   16842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:10.123949   16842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:10.124266   16842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:10.125909   16842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:10.129549  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:10.129561  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:10.196069  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:10.196089  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:12.729098  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:12.739162  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:12.739221  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:12.762279  313474 cri.go:89] found id: ""
	I1202 21:19:12.762293  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.762300  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:12.762305  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:12.762405  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:12.787279  313474 cri.go:89] found id: ""
	I1202 21:19:12.787293  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.787300  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:12.787306  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:12.787364  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:12.812545  313474 cri.go:89] found id: ""
	I1202 21:19:12.812558  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.812566  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:12.812571  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:12.812642  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:12.840741  313474 cri.go:89] found id: ""
	I1202 21:19:12.840755  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.840762  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:12.840767  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:12.840824  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:12.868898  313474 cri.go:89] found id: ""
	I1202 21:19:12.868912  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.868919  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:12.868924  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:12.868983  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:12.895296  313474 cri.go:89] found id: ""
	I1202 21:19:12.895310  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.895317  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:12.895322  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:12.895382  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:12.918838  313474 cri.go:89] found id: ""
	I1202 21:19:12.918852  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.918859  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:12.918867  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:12.918880  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:12.989410  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:12.989434  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:13.018849  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:13.018864  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:13.075957  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:13.075976  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:13.095483  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:13.095501  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:13.160629  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:13.153520   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:13.154016   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:13.155471   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:13.155775   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:13.157071   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:19:13.153520   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:13.154016   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:13.155471   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:13.155775   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:13.157071   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
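The describe-nodes attempt fails for the same root cause: nothing is serving on apiserver port 8441, so kubectl's API-group discovery is refused five times before it gives up (hence the five identical memcache.go errors, which minikube then echoes again inside the ** stderr ** block). Two hedged checks to confirm the condition from the node, reusing the binary and kubeconfig paths recorded in the log:

    # Confirm nothing is bound to the apiserver port before blaming kubectl
    # (8441 is the --apiserver-port this profile was started with).
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    # /readyz is served directly by the apiserver; refusal here means it never came up.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz || true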
	I1202 21:19:15.660888  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:15.670559  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:15.670624  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:15.693948  313474 cri.go:89] found id: ""
	I1202 21:19:15.693961  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.693969  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:15.693974  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:15.694041  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:15.720374  313474 cri.go:89] found id: ""
	I1202 21:19:15.720389  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.720396  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:15.720401  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:15.720460  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:15.745246  313474 cri.go:89] found id: ""
	I1202 21:19:15.745259  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.745267  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:15.745272  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:15.745339  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:15.772221  313474 cri.go:89] found id: ""
	I1202 21:19:15.772234  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.772241  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:15.772247  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:15.772317  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:15.795604  313474 cri.go:89] found id: ""
	I1202 21:19:15.795618  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.795624  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:15.795630  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:15.795687  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:15.824167  313474 cri.go:89] found id: ""
	I1202 21:19:15.824180  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.824187  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:15.824193  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:15.824252  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:15.847367  313474 cri.go:89] found id: ""
	I1202 21:19:15.847380  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.847387  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:15.847396  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:15.847406  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:15.901801  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:15.901820  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:15.917208  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:15.917228  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:15.976565  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:15.969576   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:15.970085   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:15.971162   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:15.971576   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:15.973029   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:19:15.969576   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:15.970085   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:15.971162   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:15.971576   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:15.973029   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:19:15.976575  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:15.976586  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:16.041174  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:16.041192  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:18.580269  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:18.590169  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:18.590245  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:18.615027  313474 cri.go:89] found id: ""
	I1202 21:19:18.615042  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.615049  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:18.615055  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:18.615135  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:18.640491  313474 cri.go:89] found id: ""
	I1202 21:19:18.640505  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.640512  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:18.640517  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:18.640584  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:18.665078  313474 cri.go:89] found id: ""
	I1202 21:19:18.665092  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.665099  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:18.665105  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:18.665162  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:18.689844  313474 cri.go:89] found id: ""
	I1202 21:19:18.689858  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.689865  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:18.689871  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:18.689928  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:18.715165  313474 cri.go:89] found id: ""
	I1202 21:19:18.715179  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.715186  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:18.715191  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:18.715250  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:18.740098  313474 cri.go:89] found id: ""
	I1202 21:19:18.740111  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.740118  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:18.740124  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:18.740181  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:18.764406  313474 cri.go:89] found id: ""
	I1202 21:19:18.764420  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.764427  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:18.764435  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:18.764448  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:18.795780  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:18.795801  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:18.851180  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:18.851199  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:18.867072  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:18.867088  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:18.932904  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:18.925224   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:18.926353   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:18.927456   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:18.928040   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:18.929537   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:19:18.925224   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:18.926353   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:18.927456   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:18.928040   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:18.929537   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:19:18.932917  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:18.932930  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:21.499766  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:21.511750  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:21.511824  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:21.541598  313474 cri.go:89] found id: ""
	I1202 21:19:21.541612  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.541619  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:21.541624  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:21.541710  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:21.565690  313474 cri.go:89] found id: ""
	I1202 21:19:21.565705  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.565712  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:21.565717  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:21.565786  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:21.588975  313474 cri.go:89] found id: ""
	I1202 21:19:21.588989  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.588996  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:21.589002  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:21.589060  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:21.616075  313474 cri.go:89] found id: ""
	I1202 21:19:21.616100  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.616108  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:21.616114  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:21.616189  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:21.640380  313474 cri.go:89] found id: ""
	I1202 21:19:21.640393  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.640410  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:21.640416  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:21.640473  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:21.664881  313474 cri.go:89] found id: ""
	I1202 21:19:21.664895  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.664912  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:21.664919  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:21.664976  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:21.688940  313474 cri.go:89] found id: ""
	I1202 21:19:21.688961  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.688968  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:21.688976  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:21.688986  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:21.747031  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:21.747050  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:21.762969  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:21.762988  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:21.829106  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:21.821852   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:21.822216   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:21.823843   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:21.824190   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:21.825723   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:19:21.821852   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:21.822216   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:21.823843   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:21.824190   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:21.825723   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:19:21.829117  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:21.829142  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:21.890717  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:21.890735  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:24.418721  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:24.428805  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:24.428867  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:24.454807  313474 cri.go:89] found id: ""
	I1202 21:19:24.454820  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.454827  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:24.454844  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:24.454905  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:24.479376  313474 cri.go:89] found id: ""
	I1202 21:19:24.479390  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.479396  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:24.479402  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:24.479459  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:24.504161  313474 cri.go:89] found id: ""
	I1202 21:19:24.504174  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.504181  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:24.504195  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:24.504257  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:24.529438  313474 cri.go:89] found id: ""
	I1202 21:19:24.529452  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.529460  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:24.529466  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:24.529540  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:24.554237  313474 cri.go:89] found id: ""
	I1202 21:19:24.554251  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.554258  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:24.554264  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:24.554322  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:24.583978  313474 cri.go:89] found id: ""
	I1202 21:19:24.583992  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.583999  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:24.584005  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:24.584071  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:24.608672  313474 cri.go:89] found id: ""
	I1202 21:19:24.608686  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.608694  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:24.608702  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:24.608711  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:24.663382  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:24.663399  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:24.678935  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:24.678953  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:24.741560  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:24.733345   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.733924   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.735511   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.736192   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.737811   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:19:24.733345   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.733924   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.735511   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.736192   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.737811   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:19:24.741571  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:24.741584  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:24.805991  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:24.806014  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:27.332486  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:27.343923  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:27.343980  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:27.370846  313474 cri.go:89] found id: ""
	I1202 21:19:27.370862  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.370869  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:27.370874  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:27.370933  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:27.394765  313474 cri.go:89] found id: ""
	I1202 21:19:27.394779  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.394786  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:27.394791  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:27.394858  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:27.418228  313474 cri.go:89] found id: ""
	I1202 21:19:27.418241  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.418248  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:27.418254  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:27.418312  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:27.442428  313474 cri.go:89] found id: ""
	I1202 21:19:27.442441  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.442448  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:27.442454  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:27.442516  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:27.467409  313474 cri.go:89] found id: ""
	I1202 21:19:27.467423  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.467430  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:27.467435  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:27.467492  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:27.490186  313474 cri.go:89] found id: ""
	I1202 21:19:27.490200  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.490207  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:27.490213  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:27.490270  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:27.515032  313474 cri.go:89] found id: ""
	I1202 21:19:27.515046  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.515054  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:27.515062  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:27.515072  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:27.570118  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:27.570137  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:27.585958  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:27.585974  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:27.649259  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:27.641242   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.641812   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.643494   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.644027   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.645611   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:19:27.641242   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.641812   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.643494   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.644027   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.645611   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:19:27.649269  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:27.649288  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:27.711120  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:27.711140  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:30.243770  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:30.255318  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:30.255385  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:30.279952  313474 cri.go:89] found id: ""
	I1202 21:19:30.279966  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.279974  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:30.279979  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:30.280039  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:30.320036  313474 cri.go:89] found id: ""
	I1202 21:19:30.320049  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.320056  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:30.320061  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:30.320119  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:30.351365  313474 cri.go:89] found id: ""
	I1202 21:19:30.351378  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.351385  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:30.351391  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:30.351449  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:30.378208  313474 cri.go:89] found id: ""
	I1202 21:19:30.378221  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.378228  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:30.378234  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:30.378293  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:30.404248  313474 cri.go:89] found id: ""
	I1202 21:19:30.404262  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.404268  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:30.404274  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:30.404331  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:30.428678  313474 cri.go:89] found id: ""
	I1202 21:19:30.428691  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.428698  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:30.428714  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:30.428786  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:30.452008  313474 cri.go:89] found id: ""
	I1202 21:19:30.452021  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.452039  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:30.452047  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:30.452057  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:30.506509  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:30.506530  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:30.522444  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:30.522464  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:30.585091  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:30.576660   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.577294   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.579170   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.579871   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.581501   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:19:30.576660   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.577294   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.579170   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.579871   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.581501   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:19:30.585102  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:30.585112  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:30.649461  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:30.649484  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:33.184340  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:33.195406  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:33.195468  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:33.220999  313474 cri.go:89] found id: ""
	I1202 21:19:33.221013  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.221020  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:33.221026  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:33.221087  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:33.245046  313474 cri.go:89] found id: ""
	I1202 21:19:33.245060  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.245068  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:33.245073  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:33.245134  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:33.268397  313474 cri.go:89] found id: ""
	I1202 21:19:33.268410  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.268417  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:33.268423  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:33.268485  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:33.304556  313474 cri.go:89] found id: ""
	I1202 21:19:33.304569  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.304577  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:33.304582  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:33.304643  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:33.335992  313474 cri.go:89] found id: ""
	I1202 21:19:33.336006  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.336013  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:33.336019  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:33.336086  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:33.367967  313474 cri.go:89] found id: ""
	I1202 21:19:33.367980  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.367989  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:33.367995  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:33.368052  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:33.393839  313474 cri.go:89] found id: ""
	I1202 21:19:33.393853  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.393860  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:33.393867  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:33.393877  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:33.448875  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:33.448894  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:33.464807  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:33.464822  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:33.531228  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:33.523917   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.524445   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.525987   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.526306   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.527749   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:19:33.523917   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.524445   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.525987   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.526306   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.527749   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:19:33.531238  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:33.531248  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:33.592933  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:33.592951  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:36.121943  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:36.132447  313474 kubeadm.go:602] duration metric: took 4m4.151661323s to restartPrimaryControlPlane
	W1202 21:19:36.132510  313474 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 21:19:36.132588  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
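At 21:19:36 the loop times out: after 4m4s with no apiserver process, minikube abandons restartPrimaryControlPlane and falls back to a full reset (the `<no value>` in the warning above is the message template's own placeholder). The reset it runs is plain kubeadm, reformatted here for readability as a standalone command:

    # Tear down all kubeadm-managed state so a clean `kubeadm init` can follow.
    # --force skips the confirmation prompt; --cri-socket points at containerd.
    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm reset --cri-socket /run/containerd/containerd.sock --force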
	I1202 21:19:36.539188  313474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:19:36.552660  313474 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 21:19:36.560203  313474 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 21:19:36.560257  313474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:19:36.567605  313474 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 21:19:36.567615  313474 kubeadm.go:158] found existing configuration files:
	
	I1202 21:19:36.567669  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:19:36.575238  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 21:19:36.575292  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 21:19:36.582200  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:19:36.589483  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 21:19:36.589539  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:19:36.596652  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:19:36.604117  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 21:19:36.604180  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:19:36.611312  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:19:36.619074  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 21:19:36.619140  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
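The four grep-and-remove steps above all follow one pattern: keep a kubeconfig only if it already points at the expected control-plane endpoint; here every file is missing, so each rm is a no-op. Condensed into a sketch (endpoint and file names taken from the log):

    # Remove any kubeconfig that does not point at the expected endpoint.
    endpoint="https://control-plane.minikube.internal:8441"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null || sudo rm -f "/etc/kubernetes/$f"
    done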
	I1202 21:19:36.626580  313474 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 21:19:36.665764  313474 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 21:19:36.665850  313474 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 21:19:36.739165  313474 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 21:19:36.739244  313474 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 21:19:36.739289  313474 kubeadm.go:319] OS: Linux
	I1202 21:19:36.739345  313474 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 21:19:36.739401  313474 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 21:19:36.739460  313474 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 21:19:36.739515  313474 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 21:19:36.739574  313474 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 21:19:36.739631  313474 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 21:19:36.739681  313474 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 21:19:36.739743  313474 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 21:19:36.739800  313474 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 21:19:36.802641  313474 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 21:19:36.802776  313474 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 21:19:36.802889  313474 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 21:19:36.810139  313474 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 21:19:36.815519  313474 out.go:252]   - Generating certificates and keys ...
	I1202 21:19:36.815612  313474 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 21:19:36.815684  313474 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 21:19:36.815766  313474 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 21:19:36.815832  313474 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 21:19:36.815906  313474 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 21:19:36.815965  313474 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 21:19:36.816035  313474 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 21:19:36.816096  313474 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 21:19:36.816180  313474 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 21:19:36.816258  313474 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 21:19:36.816301  313474 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 21:19:36.816363  313474 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 21:19:36.979466  313474 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 21:19:37.030688  313474 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 21:19:37.178864  313474 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 21:19:37.287458  313474 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 21:19:37.759486  313474 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 21:19:37.759977  313474 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 21:19:37.764136  313474 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 21:19:37.767507  313474 out.go:252]   - Booting up control plane ...
	I1202 21:19:37.767615  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 21:19:37.767697  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 21:19:37.768187  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 21:19:37.789119  313474 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 21:19:37.789389  313474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 21:19:37.796801  313474 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 21:19:37.797075  313474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 21:19:37.797116  313474 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 21:19:37.935526  313474 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 21:19:37.935655  313474 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 21:23:37.935181  313474 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000055683s
	I1202 21:23:37.935206  313474 kubeadm.go:319] 
	I1202 21:23:37.935262  313474 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 21:23:37.935294  313474 kubeadm.go:319] 	- The kubelet is not running
	I1202 21:23:37.935397  313474 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 21:23:37.935402  313474 kubeadm.go:319] 
	I1202 21:23:37.935505  313474 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 21:23:37.935535  313474 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 21:23:37.935565  313474 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 21:23:37.935567  313474 kubeadm.go:319] 
	I1202 21:23:37.939509  313474 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 21:23:37.940015  313474 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 21:23:37.940174  313474 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 21:23:37.940488  313474 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1202 21:23:37.940494  313474 kubeadm.go:319] 
	I1202 21:23:37.940592  313474 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1202 21:23:37.940735  313474 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000055683s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1202 21:23:37.940819  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1202 21:23:38.352160  313474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:23:38.364903  313474 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 21:23:38.364957  313474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:23:38.373626  313474 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 21:23:38.373635  313474 kubeadm.go:158] found existing configuration files:
	
	I1202 21:23:38.373703  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:23:38.380912  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 21:23:38.380966  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 21:23:38.387986  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:23:38.395511  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 21:23:38.395567  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:23:38.403067  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:23:38.410435  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 21:23:38.410491  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:23:38.417648  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:23:38.425411  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 21:23:38.425466  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 21:23:38.432690  313474 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 21:23:38.469901  313474 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 21:23:38.470170  313474 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 21:23:38.543545  313474 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 21:23:38.543611  313474 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 21:23:38.543646  313474 kubeadm.go:319] OS: Linux
	I1202 21:23:38.543689  313474 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 21:23:38.543736  313474 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 21:23:38.543782  313474 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 21:23:38.543829  313474 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 21:23:38.543876  313474 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 21:23:38.543922  313474 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 21:23:38.543966  313474 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 21:23:38.544013  313474 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 21:23:38.544058  313474 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 21:23:38.612266  313474 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 21:23:38.612377  313474 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 21:23:38.612479  313474 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 21:23:38.617939  313474 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 21:23:38.623176  313474 out.go:252]   - Generating certificates and keys ...
	I1202 21:23:38.623272  313474 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 21:23:38.623347  313474 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 21:23:38.623429  313474 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 21:23:38.623494  313474 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 21:23:38.623569  313474 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 21:23:38.623628  313474 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 21:23:38.623699  313474 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 21:23:38.623765  313474 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 21:23:38.623849  313474 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 21:23:38.623933  313474 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 21:23:38.623979  313474 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 21:23:38.624034  313474 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 21:23:39.195644  313474 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 21:23:40.418759  313474 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 21:23:40.662567  313474 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 21:23:41.331428  313474 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 21:23:41.582387  313474 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 21:23:41.582932  313474 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 21:23:41.585414  313474 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 21:23:41.588388  313474 out.go:252]   - Booting up control plane ...
	I1202 21:23:41.588487  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 21:23:41.588564  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 21:23:41.588629  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 21:23:41.609723  313474 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 21:23:41.609836  313474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 21:23:41.617428  313474 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 21:23:41.617997  313474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 21:23:41.618040  313474 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 21:23:41.754122  313474 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 21:23:41.754238  313474 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 21:27:41.753164  313474 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001114938s
	I1202 21:27:41.753189  313474 kubeadm.go:319] 
	I1202 21:27:41.753242  313474 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 21:27:41.753272  313474 kubeadm.go:319] 	- The kubelet is not running
	I1202 21:27:41.753369  313474 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 21:27:41.753373  313474 kubeadm.go:319] 
	I1202 21:27:41.753470  313474 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 21:27:41.753499  313474 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 21:27:41.753527  313474 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 21:27:41.753530  313474 kubeadm.go:319] 
	I1202 21:27:41.757163  313474 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 21:27:41.757586  313474 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 21:27:41.757709  313474 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 21:27:41.757943  313474 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 21:27:41.757948  313474 kubeadm.go:319] 
	I1202 21:27:41.758016  313474 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1202 21:27:41.758065  313474 kubeadm.go:403] duration metric: took 12m9.810714629s to StartCluster
	I1202 21:27:41.758097  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:27:41.758157  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:27:41.783479  313474 cri.go:89] found id: ""
	I1202 21:27:41.783492  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.783500  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:27:41.783505  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:27:41.783577  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:27:41.814610  313474 cri.go:89] found id: ""
	I1202 21:27:41.814624  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.814631  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:27:41.814644  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:27:41.814702  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:27:41.844545  313474 cri.go:89] found id: ""
	I1202 21:27:41.844559  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.844566  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:27:41.844571  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:27:41.844630  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:27:41.876235  313474 cri.go:89] found id: ""
	I1202 21:27:41.876250  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.876257  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:27:41.876262  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:27:41.876320  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:27:41.899944  313474 cri.go:89] found id: ""
	I1202 21:27:41.899957  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.899964  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:27:41.899969  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:27:41.900027  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:27:41.924640  313474 cri.go:89] found id: ""
	I1202 21:27:41.924653  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.924660  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:27:41.924666  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:27:41.924723  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:27:41.951344  313474 cri.go:89] found id: ""
	I1202 21:27:41.951358  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.951365  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:27:41.951373  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:27:41.951383  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:27:42.009004  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:27:42.009028  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:27:42.033968  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:27:42.033989  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:27:42.114849  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:27:42.103925   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.104852   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.106932   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.108645   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.109525   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:27:42.103925   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.104852   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.106932   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.108645   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.109525   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:27:42.114863  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:27:42.114875  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:27:42.193571  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:27:42.193593  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 21:27:42.259231  313474 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001114938s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 21:27:42.259270  313474 out.go:285] * 
	W1202 21:27:42.259601  313474 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001114938s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 21:27:42.259616  313474 out.go:285] * 
	W1202 21:27:42.262291  313474 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:27:42.269405  313474 out.go:203] 
	W1202 21:27:42.272139  313474 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001114938s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 21:27:42.272287  313474 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 21:27:42.272371  313474 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 21:27:42.276351  313474 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609401010Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609411414Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609426076Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609435913Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609447105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609462194Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609496941Z" level=info msg="runtime interface created"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609503784Z" level=info msg="created NRI interface"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609513794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609548107Z" level=info msg="Connect containerd service"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609923390Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.610459300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.628985739Z" level=info msg="Start subscribing containerd event"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.629232566Z" level=info msg="Start recovering state"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.630271509Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.630432538Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655348240Z" level=info msg="Start event monitor"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655522692Z" level=info msg="Start cni network conf syncer for default"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655586969Z" level=info msg="Start streaming server"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655657638Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655717968Z" level=info msg="runtime interface starting up..."
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655774631Z" level=info msg="starting plugins..."
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655837464Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 21:15:30 functional-753958 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.657496581Z" level=info msg="containerd successfully booted in 0.074787s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:27:45.826910   21735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:45.828286   21735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:45.829230   21735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:45.833821   21735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:45.834436   21735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 21:27:45 up  3:10,  0 user,  load average: 0.16, 0.21, 0.52
	Linux functional-753958 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:27:42 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:27:43 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 02 21:27:43 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:27:43 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:27:43 functional-753958 kubelet[21555]: E1202 21:27:43.349766   21555 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:27:43 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:27:43 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:27:44 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 02 21:27:44 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:27:44 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:27:44 functional-753958 kubelet[21606]: E1202 21:27:44.104101   21606 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:27:44 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:27:44 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:27:44 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 02 21:27:44 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:27:44 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:27:44 functional-753958 kubelet[21641]: E1202 21:27:44.848247   21641 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:27:44 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:27:44 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:27:45 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 02 21:27:45 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:27:45 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:27:45 functional-753958 kubelet[21664]: E1202 21:27:45.600914   21664 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:27:45 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:27:45 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958: exit status 2 (326.316226ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753958" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.34s)
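
The failures in this group share one root cause, visible in the kubelet journal above: kubelet exits during configuration validation because the host runs cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so kube-apiserver never starts and every connection to port 8441 is refused. A quick way to confirm a host's cgroup mode (a generic sketch, not a command this job ran) is:

	# cgroup2fs means cgroup v2; tmpfs means cgroup v1, as on this Ubuntu 20.04 runner
	stat -fc %T /sys/fs/cgroup/

The error text matches the KubeletConfiguration field failCgroupV1; assuming that is how this kubelet is configured, either moving the runner to a cgroup v2 host or setting failCgroupV1 to false would let kubelet start, the latter at the cost of running on a deprecated cgroup setup.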

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-753958 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-753958 apply -f testdata/invalidsvc.yaml: exit status 1 (59.315047ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-753958 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)
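
Note that kubectl never got as far as validating the service object itself: the apply aborted while downloading the OpenAPI schema because nothing is listening on 192.168.49.2:8441, the same apiserver-down condition as above. A minimal reachability probe (hypothetical, not part of the test) would be:

	# a healthy kube-apiserver answers on /healthz; here the TCP connect itself is refused
	curl -k --max-time 5 https://192.168.49.2:8441/healthz

With a running apiserver, the test would instead exercise its intended failure mode: rejection of the deliberately invalid service in testdata/invalidsvc.yaml.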

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (2.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-753958 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-753958 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-753958 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-753958 --alsologtostderr -v=1] stderr:
I1202 21:29:59.310874  330824 out.go:360] Setting OutFile to fd 1 ...
I1202 21:29:59.311019  330824 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:29:59.311037  330824 out.go:374] Setting ErrFile to fd 2...
I1202 21:29:59.311054  330824 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:29:59.311354  330824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
I1202 21:29:59.311642  330824 mustload.go:66] Loading cluster: functional-753958
I1202 21:29:59.312074  330824 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 21:29:59.312585  330824 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
I1202 21:29:59.329445  330824 host.go:66] Checking if "functional-753958" exists ...
I1202 21:29:59.329787  330824 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1202 21:29:59.382900  330824 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:29:59.374170926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1202 21:29:59.383024  330824 api_server.go:166] Checking apiserver status ...
I1202 21:29:59.383085  330824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1202 21:29:59.383138  330824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
I1202 21:29:59.401426  330824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
W1202 21:29:59.506793  330824 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1202 21:29:59.509947  330824 out.go:179] * The control-plane node functional-753958 apiserver is not running: (state=Stopped)
I1202 21:29:59.512828  330824 out.go:179]   To start a cluster, run: "minikube start -p functional-753958"
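
The stderr above shows the dashboard command's preflight rather than a dashboard bug: minikube decides whether the apiserver is up by looking for the process inside the node (sudo pgrep -xnf kube-apiserver.*minikube.*), gets exit status 1 because no such process exists, and bails out before ever producing a URL. The same check can be reproduced by hand (a sketch, not part of the test run):

	# empty output and exit status 1 when kube-apiserver is not running in the node
	minikube -p functional-753958 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'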
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753958
helpers_test.go:243: (dbg) docker inspect functional-753958:

-- stdout --
	[
	    {
	        "Id": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	        "Created": "2025-12-02T21:00:39.470229988Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301734,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:00:39.535019201Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hostname",
	        "HostsPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hosts",
	        "LogPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a-json.log",
	        "Name": "/functional-753958",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753958:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-753958",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	                "LowerDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753958",
	                "Source": "/var/lib/docker/volumes/functional-753958/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753958",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753958",
	                "name.minikube.sigs.k8s.io": "functional-753958",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44df82336b1507d3d877e818baebb098332071ab7b3e3f7343e15c1fe55b3ab1",
	            "SandboxKey": "/var/run/docker/netns/44df82336b15",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753958": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:7f:7f:d7:c5:84",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e90d0c1216d32743827f22180e4e07c31360f0f3cc3431312aff46869716bb9",
	                    "EndpointID": "5ead8efafa1df1b03c8f1f51c032157081a17706bc48186adc0670bc42c0b521",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753958",
	                        "321ef4a88b51"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958: exit status 2 (309.16759ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-753958 logs -n 25: (1.13072141s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-753958 service hello-node --url --format={{.IP}}                                                                                         │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ service   │ functional-753958 service hello-node --url                                                                                                          │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ mount     │ -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1146874681/001:/mount-9p --alsologtostderr -v=1              │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ ssh       │ functional-753958 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ ssh       │ functional-753958 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh       │ functional-753958 ssh -- ls -la /mount-9p                                                                                                           │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh       │ functional-753958 ssh cat /mount-9p/test-1764710990148751834                                                                                        │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh       │ functional-753958 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ ssh       │ functional-753958 ssh sudo umount -f /mount-9p                                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh       │ functional-753958 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ mount     │ -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3541532396/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ ssh       │ functional-753958 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh       │ functional-753958 ssh -- ls -la /mount-9p                                                                                                           │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh       │ functional-753958 ssh sudo umount -f /mount-9p                                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ mount     │ -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1058117317/001:/mount1 --alsologtostderr -v=1                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ mount     │ -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1058117317/001:/mount2 --alsologtostderr -v=1                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ ssh       │ functional-753958 ssh findmnt -T /mount1                                                                                                            │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ mount     │ -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1058117317/001:/mount3 --alsologtostderr -v=1                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ ssh       │ functional-753958 ssh findmnt -T /mount2                                                                                                            │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh       │ functional-753958 ssh findmnt -T /mount3                                                                                                            │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ mount     │ -p functional-753958 --kill=true                                                                                                                    │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ start     │ -p functional-753958 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ start     │ -p functional-753958 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ start     │ -p functional-753958 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0           │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-753958 --alsologtostderr -v=1                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:29:59
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:29:59.074504  330745 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:29:59.074728  330745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:29:59.074756  330745 out.go:374] Setting ErrFile to fd 2...
	I1202 21:29:59.074776  330745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:29:59.075071  330745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:29:59.075484  330745 out.go:368] Setting JSON to false
	I1202 21:29:59.076394  330745 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":11537,"bootTime":1764699462,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 21:29:59.076493  330745 start.go:143] virtualization:  
	I1202 21:29:59.083017  330745 out.go:179] * [functional-753958] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:29:59.086254  330745 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:29:59.086368  330745 notify.go:221] Checking for updates...
	I1202 21:29:59.091916  330745 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:29:59.094811  330745 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:29:59.098258  330745 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 21:29:59.101319  330745 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:29:59.104119  330745 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:29:59.107443  330745 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:29:59.108020  330745 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:29:59.133893  330745 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:29:59.133996  330745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:29:59.195113  330745 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:29:59.186387815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:29:59.195217  330745 docker.go:319] overlay module found
	I1202 21:29:59.198460  330745 out.go:179] * Using the docker driver based on existing profile
	I1202 21:29:59.201271  330745 start.go:309] selected driver: docker
	I1202 21:29:59.201292  330745 start.go:927] validating driver "docker" against &{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:29:59.201401  330745 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:29:59.201516  330745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:29:59.254323  330745 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:29:59.245065143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:29:59.254742  330745 cni.go:84] Creating CNI manager for ""
	I1202 21:29:59.254813  330745 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 21:29:59.254857  330745 start.go:353] cluster config:
	{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:29:59.257859  330745 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609401010Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609411414Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609426076Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609435913Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609447105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609462194Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609496941Z" level=info msg="runtime interface created"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609503784Z" level=info msg="created NRI interface"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609513794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609548107Z" level=info msg="Connect containerd service"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609923390Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.610459300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.628985739Z" level=info msg="Start subscribing containerd event"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.629232566Z" level=info msg="Start recovering state"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.630271509Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.630432538Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655348240Z" level=info msg="Start event monitor"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655522692Z" level=info msg="Start cni network conf syncer for default"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655586969Z" level=info msg="Start streaming server"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655657638Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655717968Z" level=info msg="runtime interface starting up..."
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655774631Z" level=info msg="starting plugins..."
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655837464Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 21:15:30 functional-753958 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.657496581Z" level=info msg="containerd successfully booted in 0.074787s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:30:00.978396   23881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:30:00.979243   23881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:30:00.980344   23881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:30:00.980921   23881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:30:00.982717   23881 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 21:30:01 up  3:12,  0 user,  load average: 0.81, 0.32, 0.51
	Linux functional-753958 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:29:57 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:29:58 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 502.
	Dec 02 21:29:58 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:29:58 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:29:58 functional-753958 kubelet[23739]: E1202 21:29:58.366181   23739 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:29:58 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:29:58 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:29:59 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 503.
	Dec 02 21:29:59 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:29:59 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:29:59 functional-753958 kubelet[23759]: E1202 21:29:59.100319   23759 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:29:59 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:29:59 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:29:59 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 504.
	Dec 02 21:29:59 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:29:59 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:29:59 functional-753958 kubelet[23776]: E1202 21:29:59.839851   23776 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:29:59 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:29:59 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:30:00 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 505.
	Dec 02 21:30:00 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:30:00 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:30:00 functional-753958 kubelet[23803]: E1202 21:30:00.681679   23803 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:30:00 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:30:00 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958: exit status 2 (462.489588ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753958" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (2.28s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-753958 status: exit status 2 (306.670882ms)

-- stdout --
	functional-753958
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-753958 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-753958 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (304.956864ms)

-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-753958 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-753958 status -o json: exit status 2 (318.856271ms)

-- stdout --
	{"Name":"functional-753958","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-753958 status -o json" : exit status 2
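All three status invocations exit 2 because the cluster components report Stopped while the container host reports Running; the post-mortem helpers below annotate the same exit code with "(may be ok)" for exactly this reason. To pull a single field out of the JSON form, one could run (assumes jq is installed on the runner; the test itself does not use it):

	out/minikube-linux-arm64 -p functional-753958 status -o json | jq -r .Kubelet
	# prints: Stopped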
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753958
helpers_test.go:243: (dbg) docker inspect functional-753958:

-- stdout --
	[
	    {
	        "Id": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	        "Created": "2025-12-02T21:00:39.470229988Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301734,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:00:39.535019201Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hostname",
	        "HostsPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hosts",
	        "LogPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a-json.log",
	        "Name": "/functional-753958",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753958:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-753958",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	                "LowerDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753958",
	                "Source": "/var/lib/docker/volumes/functional-753958/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753958",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753958",
	                "name.minikube.sigs.k8s.io": "functional-753958",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44df82336b1507d3d877e818baebb098332071ab7b3e3f7343e15c1fe55b3ab1",
	            "SandboxKey": "/var/run/docker/netns/44df82336b15",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753958": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:7f:7f:d7:c5:84",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e90d0c1216d32743827f22180e4e07c31360f0f3cc3431312aff46869716bb9",
	                    "EndpointID": "5ead8efafa1df1b03c8f1f51c032157081a17706bc48186adc0670bc42c0b521",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753958",
	                        "321ef4a88b51"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
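Every port in the inspect output above is published only on 127.0.0.1, with the apiserver's 8441/tcp mapped to host port 33111. The same Go-template form of docker inspect that minikube uses later in this log for 22/tcp can read any one mapping back directly:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-753958   # prints 33111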
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958: exit status 2 (331.202508ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ functional-753958 addons list -o json                                                                                                               │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ service │ functional-753958 service list                                                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ service │ functional-753958 service list -o json                                                                                                              │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ service │ functional-753958 service --namespace=default --https --url hello-node                                                                              │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ service │ functional-753958 service hello-node --url --format={{.IP}}                                                                                         │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ service │ functional-753958 service hello-node --url                                                                                                          │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ mount   │ -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1146874681/001:/mount-9p --alsologtostderr -v=1              │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ ssh     │ functional-753958 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ ssh     │ functional-753958 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh     │ functional-753958 ssh -- ls -la /mount-9p                                                                                                           │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh     │ functional-753958 ssh cat /mount-9p/test-1764710990148751834                                                                                        │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh     │ functional-753958 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ ssh     │ functional-753958 ssh sudo umount -f /mount-9p                                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh     │ functional-753958 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ mount   │ -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3541532396/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ ssh     │ functional-753958 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh     │ functional-753958 ssh -- ls -la /mount-9p                                                                                                           │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh     │ functional-753958 ssh sudo umount -f /mount-9p                                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ mount   │ -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1058117317/001:/mount1 --alsologtostderr -v=1                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ mount   │ -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1058117317/001:/mount2 --alsologtostderr -v=1                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ ssh     │ functional-753958 ssh findmnt -T /mount1                                                                                                            │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ mount   │ -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1058117317/001:/mount3 --alsologtostderr -v=1                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ ssh     │ functional-753958 ssh findmnt -T /mount2                                                                                                            │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh     │ functional-753958 ssh findmnt -T /mount3                                                                                                            │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ mount   │ -p functional-753958 --kill=true                                                                                                                    │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:15:27
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:15:27.807151  313474 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:15:27.807260  313474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:15:27.807264  313474 out.go:374] Setting ErrFile to fd 2...
	I1202 21:15:27.807268  313474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:15:27.807610  313474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:15:27.808015  313474 out.go:368] Setting JSON to false
	I1202 21:15:27.809366  313474 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10666,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 21:15:27.809431  313474 start.go:143] virtualization:  
	I1202 21:15:27.812823  313474 out.go:179] * [functional-753958] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:15:27.815796  313474 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:15:27.816009  313474 notify.go:221] Checking for updates...
	I1202 21:15:27.821378  313474 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:15:27.824158  313474 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:15:27.826979  313474 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 21:15:27.829780  313474 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:15:27.832616  313474 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:15:27.835951  313474 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:15:27.836043  313474 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:15:27.868236  313474 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:15:27.868329  313474 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:15:27.931411  313474 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 21:15:27.921542243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:15:27.931507  313474 docker.go:319] overlay module found
	I1202 21:15:27.934670  313474 out.go:179] * Using the docker driver based on existing profile
	I1202 21:15:27.937620  313474 start.go:309] selected driver: docker
	I1202 21:15:27.937631  313474 start.go:927] validating driver "docker" against &{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:15:27.937764  313474 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:15:27.937862  313474 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:15:27.995269  313474 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 21:15:27.986382161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:15:27.995660  313474 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 21:15:27.995688  313474 cni.go:84] Creating CNI manager for ""
	I1202 21:15:27.995745  313474 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 21:15:27.995788  313474 start.go:353] cluster config:
	{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:15:27.998840  313474 out.go:179] * Starting "functional-753958" primary control-plane node in "functional-753958" cluster
	I1202 21:15:28.001915  313474 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 21:15:28.005631  313474 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:15:28.008845  313474 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 21:15:28.008946  313474 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:15:28.029517  313474 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:15:28.029530  313474 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 21:15:28.078709  313474 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1202 21:15:28.277463  313474 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1202 21:15:28.277635  313474 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/config.json ...
	I1202 21:15:28.277718  313474 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.277817  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 21:15:28.277826  313474 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 130.54µs
	I1202 21:15:28.277840  313474 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 21:15:28.277851  313474 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.277891  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 21:15:28.277896  313474 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 46.374µs
	I1202 21:15:28.277901  313474 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 21:15:28.277913  313474 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:15:28.277910  313474 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.277949  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 21:15:28.277954  313474 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 44.659µs
	I1202 21:15:28.277951  313474 start.go:360] acquireMachinesLock for functional-753958: {Name:mk3203202a2efc5b27c2a0a16d932dc1b1f07522 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.277959  313474 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 21:15:28.277969  313474 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.277991  313474 start.go:364] duration metric: took 28.011µs to acquireMachinesLock for "functional-753958"
	I1202 21:15:28.277998  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 21:15:28.278004  313474 start.go:96] Skipping create...Using existing machine configuration
	I1202 21:15:28.278003  313474 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 34.797µs
	I1202 21:15:28.278008  313474 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 21:15:28.278008  313474 fix.go:54] fixHost starting: 
	I1202 21:15:28.278015  313474 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.278051  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 21:15:28.278067  313474 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 40.63µs
	I1202 21:15:28.278075  313474 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 21:15:28.278084  313474 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.278133  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 21:15:28.278144  313474 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 58.148µs
	I1202 21:15:28.278154  313474 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 21:15:28.278163  313474 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.278201  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 21:15:28.278206  313474 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 44.323µs
	I1202 21:15:28.278211  313474 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 21:15:28.278227  313474 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.278272  313474 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:15:28.278274  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 21:15:28.278279  313474 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 53.693µs
	I1202 21:15:28.278284  313474 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 21:15:28.278293  313474 cache.go:87] Successfully saved all images to host disk.
	I1202 21:15:28.303149  313474 fix.go:112] recreateIfNeeded on functional-753958: state=Running err=<nil>
	W1202 21:15:28.303168  313474 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 21:15:28.306592  313474 out.go:252] * Updating the running docker "functional-753958" container ...
	I1202 21:15:28.306620  313474 machine.go:94] provisionDockerMachine start ...
	I1202 21:15:28.306711  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:28.331641  313474 main.go:143] libmachine: Using SSH client type: native
	I1202 21:15:28.331992  313474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:15:28.331999  313474 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:15:28.485262  313474 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-753958
	
	I1202 21:15:28.485277  313474 ubuntu.go:182] provisioning hostname "functional-753958"
	I1202 21:15:28.485346  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:28.502136  313474 main.go:143] libmachine: Using SSH client type: native
	I1202 21:15:28.502454  313474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:15:28.502463  313474 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-753958 && echo "functional-753958" | sudo tee /etc/hostname
	I1202 21:15:28.662872  313474 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-753958
	
	I1202 21:15:28.662941  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:28.680996  313474 main.go:143] libmachine: Using SSH client type: native
	I1202 21:15:28.681283  313474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:15:28.681296  313474 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753958/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:15:28.829833  313474 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 21:15:28.829849  313474 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 21:15:28.829870  313474 ubuntu.go:190] setting up certificates
	I1202 21:15:28.829878  313474 provision.go:84] configureAuth start
	I1202 21:15:28.829936  313474 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
	I1202 21:15:28.847119  313474 provision.go:143] copyHostCerts
	I1202 21:15:28.847182  313474 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 21:15:28.847194  313474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 21:15:28.847267  313474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 21:15:28.847367  313474 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 21:15:28.847372  313474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 21:15:28.847403  313474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 21:15:28.847459  313474 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 21:15:28.847462  313474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 21:15:28.847485  313474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 21:15:28.847574  313474 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.functional-753958 san=[127.0.0.1 192.168.49.2 functional-753958 localhost minikube]
	I1202 21:15:28.960674  313474 provision.go:177] copyRemoteCerts
	I1202 21:15:28.960733  313474 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:15:28.960772  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:28.978043  313474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:15:29.081719  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 21:15:29.105765  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 21:15:29.122371  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:15:29.139343  313474 provision.go:87] duration metric: took 309.452187ms to configureAuth
	I1202 21:15:29.139359  313474 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:15:29.139545  313474 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:15:29.139550  313474 machine.go:97] duration metric: took 832.92543ms to provisionDockerMachine
	I1202 21:15:29.139557  313474 start.go:293] postStartSetup for "functional-753958" (driver="docker")
	I1202 21:15:29.139567  313474 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:15:29.139623  313474 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:15:29.139660  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:29.156608  313474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:15:29.261796  313474 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:15:29.265154  313474 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:15:29.265170  313474 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:15:29.265181  313474 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 21:15:29.265234  313474 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 21:15:29.265309  313474 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 21:15:29.265381  313474 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts -> hosts in /etc/test/nested/copy/263241
	I1202 21:15:29.265422  313474 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/263241
	I1202 21:15:29.272853  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 21:15:29.290463  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts --> /etc/test/nested/copy/263241/hosts (40 bytes)
	I1202 21:15:29.307373  313474 start.go:296] duration metric: took 167.802474ms for postStartSetup
	I1202 21:15:29.307459  313474 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:15:29.307497  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:29.324791  313474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:15:29.426726  313474 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:15:29.431481  313474 fix.go:56] duration metric: took 1.153466989s for fixHost
	I1202 21:15:29.431495  313474 start.go:83] releasing machines lock for "functional-753958", held for 1.153497537s
	I1202 21:15:29.431566  313474 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
	I1202 21:15:29.447801  313474 ssh_runner.go:195] Run: cat /version.json
	I1202 21:15:29.447846  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:29.447885  313474 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:15:29.447935  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:29.467421  313474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:15:29.471596  313474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:15:29.659911  313474 ssh_runner.go:195] Run: systemctl --version
	I1202 21:15:29.666244  313474 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 21:15:29.670444  313474 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:15:29.670514  313474 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:15:29.678098  313474 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 21:15:29.678112  313474 start.go:496] detecting cgroup driver to use...
	I1202 21:15:29.678141  313474 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 21:15:29.678186  313474 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 21:15:29.694041  313474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 21:15:29.710665  313474 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:15:29.710716  313474 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:15:29.728421  313474 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:15:29.743568  313474 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:15:29.860902  313474 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:15:29.989688  313474 docker.go:234] disabling docker service ...
	I1202 21:15:29.989770  313474 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:15:30.008558  313474 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:15:30.033480  313474 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:15:30.168415  313474 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:15:30.289508  313474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 21:15:30.302465  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:15:30.316926  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 21:15:30.325512  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 21:15:30.334372  313474 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 21:15:30.334439  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 21:15:30.343106  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 21:15:30.351679  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 21:15:30.359860  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 21:15:30.368460  313474 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:15:30.376324  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 21:15:30.384579  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 21:15:30.393108  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
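The run of sed commands above patches /etc/containerd/config.toml in place: pinning the sandbox image, forcing SystemdCgroup = false to match the cgroupfs driver, and normalizing runtime names. A minimal Go sketch of one such rewrite, assuming a local copy of the file rather than the SSH session the real run uses:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	// Hypothetical local copy; the real run edits
	// /etc/containerd/config.toml on the node via sed over SSH.
	path := "config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, data, 0644); err != nil {
		log.Fatal(err)
	}
}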
	I1202 21:15:30.401480  313474 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:15:30.408867  313474 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 21:15:30.415924  313474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:15:30.533792  313474 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1202 21:15:30.657833  313474 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 21:15:30.657894  313474 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 21:15:30.661737  313474 start.go:564] Will wait 60s for crictl version
	I1202 21:15:30.661805  313474 ssh_runner.go:195] Run: which crictl
	I1202 21:15:30.665271  313474 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:15:30.691831  313474 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
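Before using the runtime, the start path waits up to 60s for the containerd socket to exist and then queries crictl for the version shown above. A sketch of such a socket wait, assuming a simple stat-poll loop rather than minikube's actual start.go implementation:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes,
// mirroring the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(200 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("containerd socket is ready")
}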
	I1202 21:15:30.691893  313474 ssh_runner.go:195] Run: containerd --version
	I1202 21:15:30.710586  313474 ssh_runner.go:195] Run: containerd --version
	I1202 21:15:30.734130  313474 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 21:15:30.737177  313474 cli_runner.go:164] Run: docker network inspect functional-753958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:15:30.753095  313474 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:15:30.760367  313474 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1202 21:15:30.763216  313474 kubeadm.go:884] updating cluster {Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 21:15:30.763354  313474 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 21:15:30.763426  313474 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:15:30.788120  313474 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 21:15:30.788132  313474 cache_images.go:86] Images are preloaded, skipping loading
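The preload check above lists images through crictl and concludes everything needed is already present. A sketch of that idea, assuming crictl's JSON output shape ({"images":[{"repoTags":[...]}]}) and an illustrative required-image list:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var resp struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &resp); err != nil {
		log.Fatal(err)
	}
	have := map[string]bool{}
	for _, img := range resp.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Illustrative; the real check compares against the preload manifest.
	required := []string{"registry.k8s.io/pause:3.10.1"}
	for _, want := range required {
		if !have[want] {
			log.Fatalf("missing preloaded image: %s", want)
		}
	}
	fmt.Println("all images are preloaded")
}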
	I1202 21:15:30.788138  313474 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1202 21:15:30.788245  313474 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-753958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 21:15:30.788311  313474 ssh_runner.go:195] Run: sudo crictl info
	I1202 21:15:30.816149  313474 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1202 21:15:30.816166  313474 cni.go:84] Creating CNI manager for ""
	I1202 21:15:30.816175  313474 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 21:15:30.816190  313474 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:15:30.816220  313474 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753958 NodeName:functional-753958 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:15:30.816350  313474 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-753958"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
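The kubeadm config above is rendered from templates over the cluster parameters logged earlier. A minimal, assumed sketch of that render step using text/template, covering only a few fields (not the real bootstrapper template):

package main

import (
	"os"
	"text/template"
)

// Simplified stand-in for the template that produced the config above.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	err := t.Execute(os.Stdout, struct {
		AdvertiseAddress string
		BindPort         int
		CRISocket        string
		NodeName         string
	}{"192.168.49.2", 8441, "unix:///run/containerd/containerd.sock", "functional-753958"})
	if err != nil {
		panic(err)
	}
}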
	
	I1202 21:15:30.816417  313474 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 21:15:30.824592  313474 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 21:15:30.824650  313474 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:15:30.832172  313474 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 21:15:30.844549  313474 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 21:15:30.856965  313474 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
	I1202 21:15:30.869111  313474 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:15:30.872973  313474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:15:30.993888  313474 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:15:31.292555  313474 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958 for IP: 192.168.49.2
	I1202 21:15:31.292567  313474 certs.go:195] generating shared ca certs ...
	I1202 21:15:31.292581  313474 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:15:31.292714  313474 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 21:15:31.292766  313474 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 21:15:31.292772  313474 certs.go:257] generating profile certs ...
	I1202 21:15:31.292864  313474 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key
	I1202 21:15:31.292921  313474 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key.c4f6fd35
	I1202 21:15:31.292963  313474 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key
	I1202 21:15:31.293076  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 21:15:31.293105  313474 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 21:15:31.293112  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:15:31.293138  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 21:15:31.293160  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:15:31.293184  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 21:15:31.293230  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 21:15:31.293875  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:15:31.313092  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:15:31.332062  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:15:31.351302  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 21:15:31.370658  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 21:15:31.387720  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 21:15:31.405248  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:15:31.422664  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 21:15:31.440135  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 21:15:31.457687  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:15:31.475495  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 21:15:31.492183  313474 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 21:15:31.504166  313474 ssh_runner.go:195] Run: openssl version
	I1202 21:15:31.510525  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 21:15:31.518840  313474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 21:15:31.522541  313474 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 21:15:31.522596  313474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 21:15:31.563265  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 21:15:31.571112  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:15:31.579437  313474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:15:31.583195  313474 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:15:31.583250  313474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:15:31.628890  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 21:15:31.636777  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 21:15:31.644711  313474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 21:15:31.648206  313474 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 21:15:31.648271  313474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 21:15:31.689010  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
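Each certificate install above follows the same pattern: compute the OpenSSL subject hash, then symlink <hash>.0 in /etc/ssl/certs to the certificate so OpenSSL-based clients can resolve it. A sketch of that sequence, shelling out to openssl as the log does (run locally for illustration rather than over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the hash-and-symlink steps above.
func installCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // equivalent of ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}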
	I1202 21:15:31.696812  313474 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:15:31.700482  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 21:15:31.740999  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 21:15:31.782731  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 21:15:31.823250  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 21:15:31.865611  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 21:15:31.906492  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
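The -checkend 86400 calls above ask whether each control-plane certificate expires within the next 24 hours. The equivalent check in Go, as a local sketch (the path is one of the certs checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires in
// the next d — the question `openssl x509 -checkend 86400` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Until(cert.NotAfter) < d, nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}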
	I1202 21:15:31.947359  313474 kubeadm.go:401] StartCluster: {Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:15:31.947441  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 21:15:31.947511  313474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:15:31.973182  313474 cri.go:89] found id: ""
	I1202 21:15:31.973243  313474 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:15:31.980768  313474 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 21:15:31.980777  313474 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 21:15:31.980838  313474 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 21:15:31.988019  313474 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:15:31.988518  313474 kubeconfig.go:125] found "functional-753958" server: "https://192.168.49.2:8441"
	I1202 21:15:31.989827  313474 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 21:15:31.997696  313474 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-02 21:00:56.754776837 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-02 21:15:30.864977782 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
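Drift detection above rests on diff's exit status: 0 means the staged kubeadm.yaml.new matches the deployed kubeadm.yaml, 1 means drift and triggers the reconfigure path. A sketch of that convention:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new` and interprets the exit status:
// 0 means identical, 1 means drift, anything else is an error.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drifted {
		fmt.Println("kubeadm config drift detected:\n" + diff)
	}
}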
	I1202 21:15:31.997711  313474 kubeadm.go:1161] stopping kube-system containers ...
	I1202 21:15:31.997724  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1202 21:15:31.997791  313474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:15:32.028400  313474 cri.go:89] found id: ""
	I1202 21:15:32.028460  313474 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 21:15:32.046252  313474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:15:32.054174  313474 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  2 21:05 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  2 21:05 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  2 21:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  2 21:05 /etc/kubernetes/scheduler.conf
	
	I1202 21:15:32.054235  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:15:32.061845  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:15:32.069217  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:15:32.069283  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:15:32.076901  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:15:32.084278  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:15:32.084333  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:15:32.091360  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:15:32.098582  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:15:32.098635  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 21:15:32.105786  313474 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 21:15:32.113101  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:15:32.157271  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:15:33.778908  313474 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.621612732s)
	I1202 21:15:33.778983  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:15:33.980110  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:15:34.046494  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
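The restart path replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config, with the version-pinned binaries directory prepended to PATH. A sketch of that loop (requires root on a real node; the real run wraps each phase in a sudo bash invocation over SSH):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		// Prepend the version-pinned binaries dir, as the log does.
		cmd.Env = append(os.Environ(),
			"PATH=/var/lib/minikube/binaries/v1.35.0-beta.0:"+os.Getenv("PATH"))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("kubeadm %v failed: %v", phase, err)
		}
	}
}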
	I1202 21:15:34.096642  313474 api_server.go:52] waiting for apiserver process to appear ...
	I1202 21:15:34.096721  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:34.596907  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... 117 further identical `sudo pgrep -xnf kube-apiserver.*minikube.*` polls, one every ~500ms from 21:15:35 through 21:16:33, elided; only the timestamps differ ...]
	I1202 21:16:33.596893  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
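The long run of pgrep calls above is a wait loop: poll for a kube-apiserver process roughly every 500ms until it appears or a deadline passes. In this failing run it never appears. A sketch of such a loop:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process
// shows up. pgrep exits non-zero when nothing matches, which
// exec.Command reports as an error.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process never appeared within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(time.Minute); err != nil {
		fmt.Println(err)
	}
}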
	I1202 21:16:34.097219  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:34.097318  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:34.124123  313474 cri.go:89] found id: ""
	I1202 21:16:34.124137  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.124144  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:34.124150  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:34.124209  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:34.149042  313474 cri.go:89] found id: ""
	I1202 21:16:34.149056  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.149063  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:34.149069  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:34.149127  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:34.172796  313474 cri.go:89] found id: ""
	I1202 21:16:34.172810  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.172817  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:34.172823  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:34.172888  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:34.199775  313474 cri.go:89] found id: ""
	I1202 21:16:34.199789  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.199796  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:34.199801  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:34.199858  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:34.223410  313474 cri.go:89] found id: ""
	I1202 21:16:34.223424  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.223431  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:34.223436  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:34.223542  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:34.248663  313474 cri.go:89] found id: ""
	I1202 21:16:34.248677  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.248683  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:34.248689  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:34.248747  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:34.272612  313474 cri.go:89] found id: ""
	I1202 21:16:34.272626  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.272633  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:34.272641  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:34.272650  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:34.304889  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:34.304905  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:34.363275  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:34.363294  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:34.379039  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:34.379054  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:34.446716  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:34.438632   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.439203   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.441070   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.441841   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.443136   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:34.438632   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.439203   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.441070   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.441841   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.443136   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:34.446728  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:34.446739  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
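When the apiserver wait stalls, the code gathers diagnostics from several sources, as logged above. A sketch of that fan-out, assuming each source maps to one shell command (run locally here; the real run goes through ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Each log source is a shell command, mirroring the commands above.
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
			continue
		}
		fmt.Printf("==> %s <==\n%s\n", name, out)
	}
}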
	I1202 21:16:37.010773  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:37.023010  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:37.023081  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:37.074765  313474 cri.go:89] found id: ""
	I1202 21:16:37.074779  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.074786  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:37.074791  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:37.074849  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:37.105604  313474 cri.go:89] found id: ""
	I1202 21:16:37.105617  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.105624  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:37.105630  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:37.105731  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:37.135381  313474 cri.go:89] found id: ""
	I1202 21:16:37.135395  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.135402  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:37.135407  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:37.135465  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:37.159378  313474 cri.go:89] found id: ""
	I1202 21:16:37.159391  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.159398  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:37.159404  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:37.159460  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:37.184079  313474 cri.go:89] found id: ""
	I1202 21:16:37.184093  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.184100  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:37.184105  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:37.184266  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:37.208512  313474 cri.go:89] found id: ""
	I1202 21:16:37.208526  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.208533  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:37.208539  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:37.208598  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:37.231722  313474 cri.go:89] found id: ""
	I1202 21:16:37.231735  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.231742  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:37.231750  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:37.231760  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:37.247154  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:37.247171  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:37.311439  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:37.303898   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.304432   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.306024   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.306447   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.307866   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:37.303898   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.304432   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.306024   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.306447   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.307866   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:37.311449  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:37.311459  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:37.374896  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:37.374916  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:37.402545  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:37.402561  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:39.959953  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:39.969383  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:39.969445  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:39.998437  313474 cri.go:89] found id: ""
	I1202 21:16:39.998450  313474 logs.go:282] 0 containers: []
	W1202 21:16:39.998457  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:39.998463  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:39.998519  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:40.079783  313474 cri.go:89] found id: ""
	I1202 21:16:40.079799  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.079807  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:40.079813  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:40.079882  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:40.112177  313474 cri.go:89] found id: ""
	I1202 21:16:40.112203  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.112210  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:40.112217  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:40.112289  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:40.148805  313474 cri.go:89] found id: ""
	I1202 21:16:40.148820  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.148828  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:40.148834  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:40.148918  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:40.180826  313474 cri.go:89] found id: ""
	I1202 21:16:40.180841  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.180848  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:40.180855  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:40.180930  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:40.209004  313474 cri.go:89] found id: ""
	I1202 21:16:40.209018  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.209025  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:40.209032  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:40.209091  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:40.234748  313474 cri.go:89] found id: ""
	I1202 21:16:40.234762  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.234769  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:40.234778  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:40.234788  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:40.297246  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:40.289556   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.290130   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.291723   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.292196   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.293755   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:40.289556   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.290130   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.291723   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.292196   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.293755   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:40.297257  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:40.297268  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:40.359276  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:40.359297  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:40.389165  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:40.389181  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:40.447977  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:40.447997  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
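
Note: every kubectl attempt in this log fails with `dial tcp [::1]:8441: connect: connection refused`, i.e. nothing is listening on the apiserver port (8441, as configured at test start) inside the node. An illustrative sketch of checking that condition directly, not part of the test itself:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            // Matches the log: "dial tcp [::1]:8441: connect: connection refused"
            fmt.Println("apiserver port closed:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port open")
    }
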
	I1202 21:16:42.964946  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:42.974927  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:42.974987  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:42.997720  313474 cri.go:89] found id: ""
	I1202 21:16:42.997734  313474 logs.go:282] 0 containers: []
	W1202 21:16:42.997741  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:42.997747  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:42.997808  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:43.022947  313474 cri.go:89] found id: ""
	I1202 21:16:43.022961  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.022968  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:43.022973  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:43.023034  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:43.053855  313474 cri.go:89] found id: ""
	I1202 21:16:43.053869  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.053876  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:43.053881  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:43.053941  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:43.086462  313474 cri.go:89] found id: ""
	I1202 21:16:43.086475  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.086482  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:43.086487  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:43.086545  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:43.112776  313474 cri.go:89] found id: ""
	I1202 21:16:43.112790  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.112798  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:43.112803  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:43.112861  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:43.137549  313474 cri.go:89] found id: ""
	I1202 21:16:43.137563  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.137570  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:43.137576  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:43.137695  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:43.161710  313474 cri.go:89] found id: ""
	I1202 21:16:43.161724  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.161731  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:43.161739  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:43.161751  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:43.217891  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:43.217910  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:43.233516  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:43.233539  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:43.295127  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:43.287570   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.288255   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.289907   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.290345   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.291827   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:43.287570   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.288255   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.289907   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.290345   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.291827   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:43.295145  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:43.295157  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:43.361614  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:43.361638  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
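
Note: the recurring "container status" command, `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a`, prefers crictl and falls back to docker when crictl is absent or fails. The same fallback expressed as a Go sketch (illustrative only):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus tries crictl first, then docker, like the shell one-liner.
    func containerStatus() ([]byte, error) {
        if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
            return out, nil
        }
        return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("neither crictl nor docker worked:", err)
            return
        }
        fmt.Printf("%s", out)
    }
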
	I1202 21:16:45.891122  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:45.901162  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:45.901219  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:45.924968  313474 cri.go:89] found id: ""
	I1202 21:16:45.924982  313474 logs.go:282] 0 containers: []
	W1202 21:16:45.924989  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:45.924994  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:45.925064  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:45.960327  313474 cri.go:89] found id: ""
	I1202 21:16:45.960350  313474 logs.go:282] 0 containers: []
	W1202 21:16:45.960357  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:45.960362  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:45.960428  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:45.988303  313474 cri.go:89] found id: ""
	I1202 21:16:45.988317  313474 logs.go:282] 0 containers: []
	W1202 21:16:45.988324  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:45.988330  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:45.988395  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:46.015569  313474 cri.go:89] found id: ""
	I1202 21:16:46.015582  313474 logs.go:282] 0 containers: []
	W1202 21:16:46.015590  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:46.015595  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:46.015656  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:46.042481  313474 cri.go:89] found id: ""
	I1202 21:16:46.042494  313474 logs.go:282] 0 containers: []
	W1202 21:16:46.042511  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:46.042517  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:46.042583  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:46.076870  313474 cri.go:89] found id: ""
	I1202 21:16:46.076910  313474 logs.go:282] 0 containers: []
	W1202 21:16:46.076918  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:46.076924  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:46.076995  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:46.110449  313474 cri.go:89] found id: ""
	I1202 21:16:46.110490  313474 logs.go:282] 0 containers: []
	W1202 21:16:46.110498  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:46.110514  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:46.110525  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:46.188559  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:46.179077   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.179721   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.181442   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.182155   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.183999   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:46.179077   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.179721   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.181442   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.182155   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.183999   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:46.188579  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:46.188590  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:46.253578  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:46.253598  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:46.281754  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:46.281771  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:46.338833  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:46.338850  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
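
Note: the "describe nodes" step runs the version-pinned kubectl binary minikube placed inside the node against the node-local kubeconfig. With the apiserver down it exits with status 1, which the log records as a `failed describe nodes` warning on every cycle. A sketch of that invocation, using exactly the command shown above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            // While the apiserver is unreachable this exits 1, logged as
            // "failed describe nodes ... Process exited with status 1".
            fmt.Println("describe nodes failed:", err)
        }
    }
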
	I1202 21:16:48.855152  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:48.865294  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:48.865357  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:48.889825  313474 cri.go:89] found id: ""
	I1202 21:16:48.889839  313474 logs.go:282] 0 containers: []
	W1202 21:16:48.889846  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:48.889852  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:48.889911  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:48.913688  313474 cri.go:89] found id: ""
	I1202 21:16:48.913705  313474 logs.go:282] 0 containers: []
	W1202 21:16:48.913712  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:48.913718  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:48.913781  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:48.937742  313474 cri.go:89] found id: ""
	I1202 21:16:48.937756  313474 logs.go:282] 0 containers: []
	W1202 21:16:48.937763  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:48.937779  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:48.937837  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:48.961294  313474 cri.go:89] found id: ""
	I1202 21:16:48.961308  313474 logs.go:282] 0 containers: []
	W1202 21:16:48.961315  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:48.961320  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:48.961378  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:48.985846  313474 cri.go:89] found id: ""
	I1202 21:16:48.985860  313474 logs.go:282] 0 containers: []
	W1202 21:16:48.985866  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:48.985872  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:48.985930  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:49.014392  313474 cri.go:89] found id: ""
	I1202 21:16:49.014405  313474 logs.go:282] 0 containers: []
	W1202 21:16:49.014412  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:49.014418  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:49.014478  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:49.038987  313474 cri.go:89] found id: ""
	I1202 21:16:49.039000  313474 logs.go:282] 0 containers: []
	W1202 21:16:49.039006  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:49.039014  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:49.039024  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:49.102227  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:49.102246  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:49.120563  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:49.120579  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:49.183266  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:49.175299   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.176040   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.177692   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.178265   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.179815   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:49.175299   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.176040   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.177692   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.178265   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.179815   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:49.183286  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:49.183297  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:49.246439  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:49.246458  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:51.775321  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:51.785184  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:51.785254  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:51.809810  313474 cri.go:89] found id: ""
	I1202 21:16:51.809824  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.809831  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:51.809837  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:51.809900  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:51.835767  313474 cri.go:89] found id: ""
	I1202 21:16:51.835795  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.835802  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:51.835808  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:51.835866  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:51.865885  313474 cri.go:89] found id: ""
	I1202 21:16:51.865900  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.865914  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:51.865920  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:51.865980  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:51.891809  313474 cri.go:89] found id: ""
	I1202 21:16:51.891823  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.891831  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:51.891837  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:51.891898  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:51.916253  313474 cri.go:89] found id: ""
	I1202 21:16:51.916267  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.916274  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:51.916280  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:51.916349  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:51.941007  313474 cri.go:89] found id: ""
	I1202 21:16:51.941021  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.941028  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:51.941034  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:51.941093  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:51.969353  313474 cri.go:89] found id: ""
	I1202 21:16:51.969368  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.969375  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:51.969382  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:51.969393  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:52.025261  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:52.025287  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:52.045534  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:52.045551  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:52.124972  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:52.117298   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.117769   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.119332   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.119874   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.121486   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:52.117298   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.117769   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.119332   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.119874   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.121486   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:52.124982  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:52.124993  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:52.189351  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:52.189372  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:54.721393  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:54.732232  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:54.732290  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:54.757594  313474 cri.go:89] found id: ""
	I1202 21:16:54.757608  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.757630  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:54.757671  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:54.757734  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:54.783381  313474 cri.go:89] found id: ""
	I1202 21:16:54.783395  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.783402  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:54.783407  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:54.783480  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:54.808177  313474 cri.go:89] found id: ""
	I1202 21:16:54.808198  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.808205  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:54.808211  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:54.808291  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:54.831293  313474 cri.go:89] found id: ""
	I1202 21:16:54.831307  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.831314  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:54.831331  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:54.831399  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:54.854343  313474 cri.go:89] found id: ""
	I1202 21:16:54.854357  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.854363  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:54.854368  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:54.854427  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:54.882636  313474 cri.go:89] found id: ""
	I1202 21:16:54.882650  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.882667  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:54.882673  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:54.882739  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:54.911098  313474 cri.go:89] found id: ""
	I1202 21:16:54.911112  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.911120  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:54.911128  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:54.911138  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:54.970728  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:54.970746  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:54.986382  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:54.986399  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:55.069421  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:55.058528   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.059675   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.060854   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.061730   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.063013   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:55.058528   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.059675   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.060854   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.061730   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.063013   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:55.069437  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:55.069448  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:55.151228  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:55.151266  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:57.687319  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:57.696959  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:57.697017  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:57.720719  313474 cri.go:89] found id: ""
	I1202 21:16:57.720733  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.720740  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:57.720746  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:57.720811  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:57.749778  313474 cri.go:89] found id: ""
	I1202 21:16:57.749792  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.749800  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:57.749805  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:57.749863  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:57.772871  313474 cri.go:89] found id: ""
	I1202 21:16:57.772884  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.772891  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:57.772896  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:57.772954  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:57.799916  313474 cri.go:89] found id: ""
	I1202 21:16:57.799931  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.799937  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:57.799943  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:57.800000  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:57.827165  313474 cri.go:89] found id: ""
	I1202 21:16:57.827179  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.827186  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:57.827191  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:57.827248  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:57.852136  313474 cri.go:89] found id: ""
	I1202 21:16:57.852150  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.852157  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:57.852166  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:57.852222  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:57.876624  313474 cri.go:89] found id: ""
	I1202 21:16:57.876638  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.876645  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:57.876654  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:57.876664  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:57.940462  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:57.932401   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.933065   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.934751   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.935358   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.936935   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:57.932401   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.933065   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.934751   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.935358   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.936935   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:57.940473  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:57.940483  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:58.004519  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:58.004544  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:58.036463  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:58.036479  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:58.096205  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:58.096223  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:00.618984  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:00.629839  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:00.629906  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:00.661470  313474 cri.go:89] found id: ""
	I1202 21:17:00.661490  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.661498  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:00.661505  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:00.661578  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:00.689166  313474 cri.go:89] found id: ""
	I1202 21:17:00.689182  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.689189  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:00.689202  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:00.689273  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:00.716048  313474 cri.go:89] found id: ""
	I1202 21:17:00.716063  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.716070  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:00.716076  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:00.716143  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:00.748003  313474 cri.go:89] found id: ""
	I1202 21:17:00.748017  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.748025  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:00.748030  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:00.748093  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:00.779207  313474 cri.go:89] found id: ""
	I1202 21:17:00.779223  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.779231  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:00.779238  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:00.779312  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:00.805166  313474 cri.go:89] found id: ""
	I1202 21:17:00.805184  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.805194  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:00.805200  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:00.805273  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:00.832311  313474 cri.go:89] found id: ""
	I1202 21:17:00.832326  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.832333  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:00.832342  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:00.832352  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:00.889599  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:00.889625  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:00.906214  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:00.906230  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:00.978709  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:00.969088   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.970474   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.971348   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.973000   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.973319   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:00.969088   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.970474   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.971348   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.973000   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.973319   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:00.978720  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:00.978734  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:01.044083  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:01.044105  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:03.609427  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:03.620657  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:03.620726  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:03.651829  313474 cri.go:89] found id: ""
	I1202 21:17:03.651844  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.651851  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:03.651857  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:03.651923  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:03.678868  313474 cri.go:89] found id: ""
	I1202 21:17:03.678889  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.678896  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:03.678902  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:03.678969  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:03.708792  313474 cri.go:89] found id: ""
	I1202 21:17:03.708806  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.708814  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:03.708820  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:03.708883  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:03.738501  313474 cri.go:89] found id: ""
	I1202 21:17:03.738516  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.738524  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:03.738531  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:03.738604  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:03.770026  313474 cri.go:89] found id: ""
	I1202 21:17:03.770050  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.770057  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:03.770063  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:03.770127  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:03.804285  313474 cri.go:89] found id: ""
	I1202 21:17:03.804300  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.804308  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:03.804324  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:03.804391  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:03.831572  313474 cri.go:89] found id: ""
	I1202 21:17:03.831587  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.831594  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:03.831602  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:03.831613  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:03.860060  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:03.860086  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:03.921719  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:03.921744  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:03.939033  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:03.939051  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:04.010810  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:03.998480   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:03.999337   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:04.001085   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:04.001461   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:04.006454   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:03.998480   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:03.999337   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:04.001085   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:04.001461   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:04.006454   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:04.010823  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:04.010835  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
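	The cycle above is minikube's control-plane health probe: it pgreps for a kube-apiserver process, asks crictl for each expected control-plane container, and then gathers kubelet, dmesg, describe-nodes, and containerd logs. A minimal sketch of the same checks, using only commands that appear in the output above (the component list and log sources are taken verbatim from it; running this by hand on the node is an assumption, not part of the test harness):
	
	  #!/bin/bash
	  # Probe for control-plane containers the way the log above does.
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    ids=$(sudo crictl ps -a --quiet --name="$c")
	    [ -z "$ids" ] && echo "No container was found matching \"$c\""
	  done
	  # Gather the same logs minikube collects on failure.
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u containerd -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	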
	[21:17:06 - 21:17:24: the same diagnostic cycle repeats seven more times, roughly every 3 seconds, with identical results on every pass: crictl finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers, and "kubectl describe nodes" fails with "The connection to the server localhost:8441 was refused - did you specify the right host or port?"]
	I1202 21:17:27.081832  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:27.091605  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:27.091662  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:27.116713  313474 cri.go:89] found id: ""
	I1202 21:17:27.116726  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.116734  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:27.116739  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:27.116801  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:27.140809  313474 cri.go:89] found id: ""
	I1202 21:17:27.140823  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.140830  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:27.140835  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:27.140918  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:27.167221  313474 cri.go:89] found id: ""
	I1202 21:17:27.167235  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.167242  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:27.167247  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:27.167302  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:27.191660  313474 cri.go:89] found id: ""
	I1202 21:17:27.191674  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.191681  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:27.191686  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:27.191755  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:27.219696  313474 cri.go:89] found id: ""
	I1202 21:17:27.219719  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.219727  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:27.219732  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:27.219801  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:27.247486  313474 cri.go:89] found id: ""
	I1202 21:17:27.247499  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.247506  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:27.247512  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:27.247572  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:27.270666  313474 cri.go:89] found id: ""
	I1202 21:17:27.270679  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.270687  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:27.270695  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:27.270704  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:27.329329  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:27.329349  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:27.350719  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:27.350735  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:27.420274  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:27.411429   13205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:27.412136   13205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:27.413912   13205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:27.414487   13205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:27.416006   13205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:27.420285  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:27.420338  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:27.487442  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:27.487462  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:30.014027  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:30.043373  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:30.043450  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:30.070998  313474 cri.go:89] found id: ""
	I1202 21:17:30.071012  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.071020  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:30.071026  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:30.071090  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:30.100616  313474 cri.go:89] found id: ""
	I1202 21:17:30.100630  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.100643  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:30.100649  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:30.100710  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:30.130598  313474 cri.go:89] found id: ""
	I1202 21:17:30.130612  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.130620  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:30.130626  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:30.130687  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:30.157465  313474 cri.go:89] found id: ""
	I1202 21:17:30.157479  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.157486  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:30.157492  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:30.157550  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:30.182842  313474 cri.go:89] found id: ""
	I1202 21:17:30.182857  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.182864  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:30.182870  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:30.182930  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:30.211948  313474 cri.go:89] found id: ""
	I1202 21:17:30.211962  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.211969  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:30.211975  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:30.212034  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:30.240992  313474 cri.go:89] found id: ""
	I1202 21:17:30.241006  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.241013  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:30.241020  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:30.241031  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:30.296604  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:30.296621  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:30.314431  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:30.314447  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:30.385351  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:30.377549   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:30.378411   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:30.379961   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:30.380269   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:30.381891   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:30.385362  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:30.385372  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:30.451748  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:30.451771  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:32.983767  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:32.993977  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:32.994037  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:33.020180  313474 cri.go:89] found id: ""
	I1202 21:17:33.020195  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.020202  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:33.020208  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:33.020280  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:33.048366  313474 cri.go:89] found id: ""
	I1202 21:17:33.048379  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.048386  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:33.048392  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:33.048453  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:33.075220  313474 cri.go:89] found id: ""
	I1202 21:17:33.075240  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.075247  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:33.075253  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:33.075326  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:33.099808  313474 cri.go:89] found id: ""
	I1202 21:17:33.099823  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.099831  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:33.099837  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:33.099897  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:33.124213  313474 cri.go:89] found id: ""
	I1202 21:17:33.124226  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.124233  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:33.124239  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:33.124297  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:33.150102  313474 cri.go:89] found id: ""
	I1202 21:17:33.150116  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.150123  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:33.150129  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:33.150190  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:33.174754  313474 cri.go:89] found id: ""
	I1202 21:17:33.174768  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.174775  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:33.174784  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:33.174794  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:33.243781  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:33.236366   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:33.236709   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:33.238184   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:33.238579   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:33.240086   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:33.243791  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:33.243802  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:33.306573  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:33.306592  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:33.336859  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:33.336876  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:33.398386  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:33.398404  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:35.914658  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:35.924718  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:35.924778  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:35.950094  313474 cri.go:89] found id: ""
	I1202 21:17:35.950108  313474 logs.go:282] 0 containers: []
	W1202 21:17:35.950114  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:35.950120  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:35.950182  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:35.974633  313474 cri.go:89] found id: ""
	I1202 21:17:35.974647  313474 logs.go:282] 0 containers: []
	W1202 21:17:35.974654  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:35.974660  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:35.974719  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:35.998845  313474 cri.go:89] found id: ""
	I1202 21:17:35.998859  313474 logs.go:282] 0 containers: []
	W1202 21:17:35.998866  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:35.998872  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:35.998933  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:36.027158  313474 cri.go:89] found id: ""
	I1202 21:17:36.027173  313474 logs.go:282] 0 containers: []
	W1202 21:17:36.027186  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:36.027192  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:36.027259  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:36.052916  313474 cri.go:89] found id: ""
	I1202 21:17:36.052930  313474 logs.go:282] 0 containers: []
	W1202 21:17:36.052937  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:36.052942  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:36.053002  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:36.078331  313474 cri.go:89] found id: ""
	I1202 21:17:36.078345  313474 logs.go:282] 0 containers: []
	W1202 21:17:36.078353  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:36.078359  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:36.078421  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:36.102917  313474 cri.go:89] found id: ""
	I1202 21:17:36.102935  313474 logs.go:282] 0 containers: []
	W1202 21:17:36.102942  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:36.102952  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:36.102968  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:36.170369  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:36.162878   13506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:36.163399   13506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:36.164907   13506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:36.165325   13506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:36.166819   13506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:36.170381  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:36.170396  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:36.233123  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:36.233141  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:36.260318  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:36.260336  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:36.318506  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:36.318525  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:38.836941  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:38.847151  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:38.847224  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:38.875586  313474 cri.go:89] found id: ""
	I1202 21:17:38.875599  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.875606  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:38.875612  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:38.875671  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:38.898500  313474 cri.go:89] found id: ""
	I1202 21:17:38.898514  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.898530  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:38.898538  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:38.898601  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:38.922709  313474 cri.go:89] found id: ""
	I1202 21:17:38.922723  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.922730  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:38.922735  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:38.922791  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:38.950687  313474 cri.go:89] found id: ""
	I1202 21:17:38.950701  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.950717  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:38.950723  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:38.950789  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:38.973477  313474 cri.go:89] found id: ""
	I1202 21:17:38.973490  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.973506  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:38.973514  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:38.973590  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:38.999179  313474 cri.go:89] found id: ""
	I1202 21:17:38.999193  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.999200  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:38.999206  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:38.999264  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:39.028981  313474 cri.go:89] found id: ""
	I1202 21:17:39.028995  313474 logs.go:282] 0 containers: []
	W1202 21:17:39.029002  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:39.029010  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:39.029019  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:39.091914  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:39.091935  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:39.118017  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:39.118033  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:39.174784  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:39.174803  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:39.190239  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:39.190254  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:39.253019  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:39.244615   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:39.245484   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:39.247253   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:39.247889   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:39.249431   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:41.753253  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:41.763094  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:41.763167  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:41.787441  313474 cri.go:89] found id: ""
	I1202 21:17:41.787457  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.787464  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:41.787470  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:41.787529  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:41.815733  313474 cri.go:89] found id: ""
	I1202 21:17:41.815746  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.815753  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:41.815759  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:41.815819  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:41.839039  313474 cri.go:89] found id: ""
	I1202 21:17:41.839053  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.839060  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:41.839065  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:41.839125  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:41.867760  313474 cri.go:89] found id: ""
	I1202 21:17:41.867775  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.867783  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:41.867796  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:41.867860  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:41.894114  313474 cri.go:89] found id: ""
	I1202 21:17:41.894128  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.894135  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:41.894141  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:41.894202  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:41.918156  313474 cri.go:89] found id: ""
	I1202 21:17:41.918169  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.918177  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:41.918182  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:41.918242  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:41.942010  313474 cri.go:89] found id: ""
	I1202 21:17:41.942024  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.942032  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:41.942040  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:41.942050  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:41.971871  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:41.971886  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:42.031586  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:42.031606  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:42.050658  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:42.050675  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:42.125237  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:42.114951   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:42.115932   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:42.118118   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:42.118731   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:42.120706   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:42.125249  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:42.125260  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:44.696530  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:44.706544  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:44.706605  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:44.734450  313474 cri.go:89] found id: ""
	I1202 21:17:44.734464  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.734470  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:44.734476  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:44.734535  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:44.758091  313474 cri.go:89] found id: ""
	I1202 21:17:44.758104  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.758111  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:44.758116  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:44.758178  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:44.782611  313474 cri.go:89] found id: ""
	I1202 21:17:44.782624  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.782631  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:44.782637  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:44.782700  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:44.806667  313474 cri.go:89] found id: ""
	I1202 21:17:44.806681  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.806689  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:44.806695  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:44.806757  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:44.830007  313474 cri.go:89] found id: ""
	I1202 21:17:44.830021  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.830031  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:44.830036  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:44.830098  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:44.853880  313474 cri.go:89] found id: ""
	I1202 21:17:44.853894  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.853901  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:44.853907  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:44.853970  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:44.878619  313474 cri.go:89] found id: ""
	I1202 21:17:44.878633  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.878640  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:44.878647  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:44.878657  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:44.894269  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:44.894286  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:44.959621  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:44.952378   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:44.952780   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:44.954251   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:44.954543   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:44.956016   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:44.959632  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:44.959645  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:45.023289  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:45.023311  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:45.085458  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:45.085476  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:47.687794  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:47.697486  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:47.697557  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:47.723246  313474 cri.go:89] found id: ""
	I1202 21:17:47.723259  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.723266  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:47.723272  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:47.723329  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:47.746713  313474 cri.go:89] found id: ""
	I1202 21:17:47.746726  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.746733  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:47.746739  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:47.746798  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:47.771766  313474 cri.go:89] found id: ""
	I1202 21:17:47.771779  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.771786  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:47.771791  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:47.771847  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:47.795263  313474 cri.go:89] found id: ""
	I1202 21:17:47.795277  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.795284  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:47.795289  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:47.795349  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:47.824522  313474 cri.go:89] found id: ""
	I1202 21:17:47.824536  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.824543  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:47.824548  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:47.824610  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:47.849074  313474 cri.go:89] found id: ""
	I1202 21:17:47.849089  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.849096  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:47.849102  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:47.849163  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:47.878497  313474 cri.go:89] found id: ""
	I1202 21:17:47.878512  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.878518  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:47.878526  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:47.878537  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:47.935644  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:47.935663  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:47.951723  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:47.951739  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:48.020401  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:48.011900   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:48.012882   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:48.014694   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:48.015052   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:48.016693   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:48.020422  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:48.020434  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:48.090722  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:48.090751  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:50.621799  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:50.631705  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:50.631774  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:50.656209  313474 cri.go:89] found id: ""
	I1202 21:17:50.656223  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.656230  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:50.656235  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:50.656300  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:50.680929  313474 cri.go:89] found id: ""
	I1202 21:17:50.680943  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.680950  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:50.680955  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:50.681014  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:50.705769  313474 cri.go:89] found id: ""
	I1202 21:17:50.705783  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.705790  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:50.705796  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:50.705858  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:50.731506  313474 cri.go:89] found id: ""
	I1202 21:17:50.731519  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.731526  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:50.731531  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:50.731588  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:50.754334  313474 cri.go:89] found id: ""
	I1202 21:17:50.754347  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.754354  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:50.754360  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:50.754421  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:50.778142  313474 cri.go:89] found id: ""
	I1202 21:17:50.778154  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.778162  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:50.778170  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:50.778228  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:50.801859  313474 cri.go:89] found id: ""
	I1202 21:17:50.801872  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.801880  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:50.801887  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:50.801898  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:50.862528  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:50.854527   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.855204   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.856801   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.857287   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.858805   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:50.854527   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.855204   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.856801   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.857287   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.858805   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:50.862542  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:50.862553  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:50.928955  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:50.928974  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:50.960442  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:50.960458  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:51.018671  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:51.018690  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
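The cycle above scans every control-plane component the same way: one "crictl ps -a --quiet --name=<component>" per name, treating empty output as no matching container. A minimal standalone sketch of that scan, assuming crictl is installed on the node and talks to the same containerd runtime shown in the log:

    # Query each component the way the log does; empty output means not found.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done

In this run every component prints not found, which is why each pass falls through to log gathering.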
	I1202 21:17:53.535533  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:53.550193  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:53.550254  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:53.579796  313474 cri.go:89] found id: ""
	I1202 21:17:53.579810  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.579817  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:53.579823  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:53.579885  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:53.606043  313474 cri.go:89] found id: ""
	I1202 21:17:53.606057  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.606063  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:53.606069  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:53.606125  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:53.631276  313474 cri.go:89] found id: ""
	I1202 21:17:53.631290  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.631297  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:53.631303  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:53.631360  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:53.662387  313474 cri.go:89] found id: ""
	I1202 21:17:53.662400  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.662407  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:53.662412  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:53.662467  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:53.686744  313474 cri.go:89] found id: ""
	I1202 21:17:53.686758  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.686765  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:53.686771  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:53.686832  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:53.710015  313474 cri.go:89] found id: ""
	I1202 21:17:53.710028  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.710035  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:53.710046  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:53.710102  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:53.733042  313474 cri.go:89] found id: ""
	I1202 21:17:53.733056  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.733068  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:53.733076  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:53.733088  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:53.789666  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:53.789726  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:53.805097  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:53.805113  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:53.871790  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:53.864429   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.865010   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.866541   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.866977   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.868406   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:53.864429   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.865010   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.866541   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.866977   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.868406   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:53.871801  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:53.871813  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:53.935260  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:53.935279  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
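Each failed pass ends with the same four gathering steps: the kubelet and containerd journals, a filtered dmesg, and a container status listing. Run together, the commands below reproduce the diagnostics collected per cycle; they are taken verbatim from the log lines above:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u containerd -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a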
	I1202 21:17:56.466348  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:56.476763  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:56.476830  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:56.501775  313474 cri.go:89] found id: ""
	I1202 21:17:56.501789  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.501795  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:56.501801  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:56.501861  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:56.526404  313474 cri.go:89] found id: ""
	I1202 21:17:56.526417  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.526424  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:56.526429  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:56.526487  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:56.555809  313474 cri.go:89] found id: ""
	I1202 21:17:56.555823  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.555845  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:56.555852  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:56.555923  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:56.586754  313474 cri.go:89] found id: ""
	I1202 21:17:56.586767  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.586794  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:56.586803  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:56.586871  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:56.612048  313474 cri.go:89] found id: ""
	I1202 21:17:56.612061  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.612068  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:56.612074  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:56.612134  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:56.636363  313474 cri.go:89] found id: ""
	I1202 21:17:56.636376  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.636383  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:56.636399  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:56.636456  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:56.668372  313474 cri.go:89] found id: ""
	I1202 21:17:56.668393  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.668400  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:56.668409  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:56.668418  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:56.724439  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:56.724458  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:56.740142  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:56.740161  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:56.802960  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:56.795097   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.796001   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.797561   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.798025   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.799523   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:56.795097   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.796001   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.797561   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.798025   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.799523   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:56.802970  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:56.802981  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:56.870497  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:56.870516  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:59.400859  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:59.410723  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:59.410792  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:59.434739  313474 cri.go:89] found id: ""
	I1202 21:17:59.434754  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.434761  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:59.434766  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:59.434823  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:59.459718  313474 cri.go:89] found id: ""
	I1202 21:17:59.459731  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.459738  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:59.459743  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:59.459800  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:59.484078  313474 cri.go:89] found id: ""
	I1202 21:17:59.484091  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.484098  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:59.484103  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:59.484161  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:59.510484  313474 cri.go:89] found id: ""
	I1202 21:17:59.510498  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.510505  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:59.510510  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:59.510569  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:59.535191  313474 cri.go:89] found id: ""
	I1202 21:17:59.535204  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.535211  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:59.535217  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:59.535278  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:59.566496  313474 cri.go:89] found id: ""
	I1202 21:17:59.566509  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.566516  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:59.566522  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:59.566591  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:59.605449  313474 cri.go:89] found id: ""
	I1202 21:17:59.605463  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.605470  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:59.605479  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:59.605492  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:59.670641  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:59.670659  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:59.698362  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:59.698378  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:59.755057  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:59.755075  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:59.771334  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:59.771350  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:59.833359  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:59.825268   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.826042   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.827699   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.828304   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.830013   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:59.825268   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.826042   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.827699   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.828304   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.830013   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
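The describe-nodes failures all reduce to one symptom: nothing is listening on the apiserver port (8441 here), so kubectl's discovery requests get connection refused on [::1]:8441. A quick sketch for probing that condition, using the same pgrep pattern as the log plus the apiserver's standard /livez health endpoint (the curl call is an assumption for illustration, not something the test runs):

    # Is there an apiserver process at all, and is the port answering?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    curl -sk --max-time 5 https://localhost:8441/livez || echo "apiserver not reachable on :8441"

Here the pgrep calls in the log return nothing, consistent with the refused connections.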
	I1202 21:18:02.334350  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:02.344576  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:02.344646  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:02.372330  313474 cri.go:89] found id: ""
	I1202 21:18:02.372347  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.372355  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:02.372361  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:02.372421  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:02.403621  313474 cri.go:89] found id: ""
	I1202 21:18:02.403635  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.403642  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:02.403648  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:02.403710  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:02.432672  313474 cri.go:89] found id: ""
	I1202 21:18:02.432686  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.432693  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:02.432700  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:02.432762  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:02.464631  313474 cri.go:89] found id: ""
	I1202 21:18:02.464645  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.464652  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:02.464658  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:02.464720  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:02.491546  313474 cri.go:89] found id: ""
	I1202 21:18:02.491559  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.491566  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:02.491572  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:02.491628  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:02.515275  313474 cri.go:89] found id: ""
	I1202 21:18:02.515289  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.515296  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:02.515301  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:02.515361  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:02.542560  313474 cri.go:89] found id: ""
	I1202 21:18:02.542574  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.542581  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:02.542589  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:02.542599  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:02.602107  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:02.602123  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:02.624739  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:02.624757  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:02.689790  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:02.681842   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.682258   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.683537   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.684226   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.686056   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:02.681842   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.682258   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.683537   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.684226   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.686056   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:02.689808  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:02.689819  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:02.752499  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:02.752518  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:05.283528  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:05.293718  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:05.293787  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:05.317745  313474 cri.go:89] found id: ""
	I1202 21:18:05.317758  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.317764  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:05.317770  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:05.317825  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:05.342721  313474 cri.go:89] found id: ""
	I1202 21:18:05.342735  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.342742  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:05.342747  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:05.342805  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:05.367273  313474 cri.go:89] found id: ""
	I1202 21:18:05.367295  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.367303  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:05.367311  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:05.367374  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:05.392617  313474 cri.go:89] found id: ""
	I1202 21:18:05.392630  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.392639  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:05.392644  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:05.392720  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:05.416853  313474 cri.go:89] found id: ""
	I1202 21:18:05.416866  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.416873  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:05.416878  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:05.416939  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:05.440831  313474 cri.go:89] found id: ""
	I1202 21:18:05.440845  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.440852  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:05.440858  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:05.440925  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:05.468689  313474 cri.go:89] found id: ""
	I1202 21:18:05.468702  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.468709  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:05.468718  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:05.468728  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:05.532922  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:05.524825   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.525211   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.526892   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.527288   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.529015   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:05.524825   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.525211   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.526892   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.527288   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.529015   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:05.532931  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:05.532956  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:05.603067  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:05.603086  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:05.634107  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:05.634125  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:05.690509  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:05.690527  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
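The timestamps show the whole check repeating on a roughly three-second cadence (21:18:02, 21:18:05, 21:18:08, ...). A sketch of an equivalent wait loop, assuming a 300-second deadline rather than whatever timeout the test actually uses:

    # Assumed deadline; the real timeout is not visible in this excerpt.
    deadline=$((SECONDS + 300))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver"
        break
      fi
      sleep 3
    done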
	I1202 21:18:08.208420  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:08.218671  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:08.218745  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:08.244809  313474 cri.go:89] found id: ""
	I1202 21:18:08.244823  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.244831  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:08.244837  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:08.244895  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:08.270054  313474 cri.go:89] found id: ""
	I1202 21:18:08.270068  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.270075  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:08.270080  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:08.270145  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:08.295277  313474 cri.go:89] found id: ""
	I1202 21:18:08.295291  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.295298  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:08.295304  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:08.295366  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:08.319112  313474 cri.go:89] found id: ""
	I1202 21:18:08.319125  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.319132  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:08.319138  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:08.319205  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:08.342874  313474 cri.go:89] found id: ""
	I1202 21:18:08.342888  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.342901  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:08.342908  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:08.342965  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:08.371370  313474 cri.go:89] found id: ""
	I1202 21:18:08.371384  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.371391  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:08.371397  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:08.371464  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:08.396154  313474 cri.go:89] found id: ""
	I1202 21:18:08.396167  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.396175  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:08.396183  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:08.396193  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:08.451337  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:08.451356  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:08.466550  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:08.466565  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:08.528549  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:08.520562   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.521190   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.522827   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.523353   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.525032   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:08.520562   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.521190   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.522827   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.523353   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.525032   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:08.528558  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:08.528569  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:08.606008  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:08.606028  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:11.138262  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:11.148937  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:11.148998  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:11.173696  313474 cri.go:89] found id: ""
	I1202 21:18:11.173710  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.173718  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:11.173723  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:11.173790  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:11.198792  313474 cri.go:89] found id: ""
	I1202 21:18:11.198805  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.198813  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:11.198818  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:11.198880  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:11.222802  313474 cri.go:89] found id: ""
	I1202 21:18:11.222816  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.222823  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:11.222829  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:11.222890  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:11.247731  313474 cri.go:89] found id: ""
	I1202 21:18:11.247745  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.247752  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:11.247757  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:11.247814  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:11.272133  313474 cri.go:89] found id: ""
	I1202 21:18:11.272146  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.272153  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:11.272159  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:11.272217  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:11.296871  313474 cri.go:89] found id: ""
	I1202 21:18:11.296885  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.296892  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:11.296897  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:11.296958  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:11.321716  313474 cri.go:89] found id: ""
	I1202 21:18:11.321729  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.321736  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:11.321744  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:11.321754  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:11.377048  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:11.377066  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:11.393570  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:11.393587  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:11.458188  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:11.449467   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.450311   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.451986   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.452297   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.454177   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:11.449467   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.450311   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.451986   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.452297   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.454177   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:11.458204  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:11.458220  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:11.525584  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:11.525602  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:14.058201  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:14.068731  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:14.068793  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:14.095660  313474 cri.go:89] found id: ""
	I1202 21:18:14.095674  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.095682  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:14.095688  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:14.095754  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:14.122077  313474 cri.go:89] found id: ""
	I1202 21:18:14.122090  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.122097  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:14.122102  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:14.122163  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:14.150178  313474 cri.go:89] found id: ""
	I1202 21:18:14.150192  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.150199  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:14.150204  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:14.150265  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:14.175340  313474 cri.go:89] found id: ""
	I1202 21:18:14.175353  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.175360  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:14.175372  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:14.175431  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:14.199105  313474 cri.go:89] found id: ""
	I1202 21:18:14.199118  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.199125  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:14.199130  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:14.199187  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:14.224274  313474 cri.go:89] found id: ""
	I1202 21:18:14.224288  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.224295  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:14.224300  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:14.224363  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:14.251445  313474 cri.go:89] found id: ""
	I1202 21:18:14.251458  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.251465  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:14.251473  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:14.251487  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:14.320250  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:14.311973   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.312750   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.314433   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.314978   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.316585   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:14.311973   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.312750   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.314433   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.314978   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.316585   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
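The repeated "connection refused" on [::1]:8441 means nothing is listening on the apiserver port at all, which is consistent with no kube-apiserver container being found above. A quick way to confirm from inside the node (ss from iproute2 is an assumption here, not a tool the test itself uses):

    # An empty listing confirms the refused connections: no process is
    # bound to the apiserver port (8441 in this profile).
    sudo ss -tlnp | grep 8441 || echo "nothing listening on :8441"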
	I1202 21:18:14.320261  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:14.320274  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:14.383255  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:14.383276  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:14.411409  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:14.411425  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:14.472223  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:14.472248  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
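Each diagnostic cycle gathers the same five log sources. The commands below are taken verbatim from the cycle above, collected into a single pass for convenience; the output file names are illustrative:

    # Collect the same bundle minikube gathers on every failed cycle.
    sudo journalctl -u kubelet -n 400 > kubelet.log
    sudo journalctl -u containerd -n 400 > containerd.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo "$(which crictl || echo crictl)" ps -a > containers.log || sudo docker ps -a > containers.log
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig > nodes.log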
	I1202 21:18:16.989804  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:17.000093  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:17.000155  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:17.028092  313474 cri.go:89] found id: ""
	I1202 21:18:17.028116  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.028124  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:17.028130  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:17.028198  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:17.052924  313474 cri.go:89] found id: ""
	I1202 21:18:17.052945  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.052952  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:17.052958  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:17.053029  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:17.078703  313474 cri.go:89] found id: ""
	I1202 21:18:17.078727  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.078734  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:17.078742  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:17.078812  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:17.104168  313474 cri.go:89] found id: ""
	I1202 21:18:17.104182  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.104189  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:17.104195  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:17.104299  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:17.127996  313474 cri.go:89] found id: ""
	I1202 21:18:17.128010  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.128017  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:17.128023  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:17.128088  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:17.152013  313474 cri.go:89] found id: ""
	I1202 21:18:17.152027  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.152034  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:17.152040  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:17.152100  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:17.180838  313474 cri.go:89] found id: ""
	I1202 21:18:17.180853  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.180860  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:17.180868  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:17.180878  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:17.208724  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:17.208740  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:17.264017  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:17.264035  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:17.280767  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:17.280783  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:17.347738  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:17.340260   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.340861   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.342357   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.342869   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.344337   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:17.340260   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.340861   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.342357   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.342869   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.344337   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:17.347749  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:17.347762  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
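The pgrep line that opens each cycle is the visible part of a wait loop: the process is polled roughly every three seconds, and each miss triggers another round of container probes and log gathering. A sketch of such a loop; the 3s interval matches the timestamps above, while the deadline is purely illustrative:

    # Poll until a kube-apiserver process for this profile appears,
    # or give up after an illustrative 5-minute deadline.
    deadline=$((SECONDS + 300))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3
    done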
	I1202 21:18:19.913786  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:19.923690  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:19.923756  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:19.948485  313474 cri.go:89] found id: ""
	I1202 21:18:19.948499  313474 logs.go:282] 0 containers: []
	W1202 21:18:19.948506  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:19.948512  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:19.948572  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:19.973040  313474 cri.go:89] found id: ""
	I1202 21:18:19.973054  313474 logs.go:282] 0 containers: []
	W1202 21:18:19.973062  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:19.973067  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:19.973129  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:19.997059  313474 cri.go:89] found id: ""
	I1202 21:18:19.997073  313474 logs.go:282] 0 containers: []
	W1202 21:18:19.997080  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:19.997086  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:19.997143  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:20.023852  313474 cri.go:89] found id: ""
	I1202 21:18:20.023868  313474 logs.go:282] 0 containers: []
	W1202 21:18:20.023876  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:20.023882  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:20.023963  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:20.050761  313474 cri.go:89] found id: ""
	I1202 21:18:20.050775  313474 logs.go:282] 0 containers: []
	W1202 21:18:20.050782  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:20.050788  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:20.050849  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:20.080281  313474 cri.go:89] found id: ""
	I1202 21:18:20.080299  313474 logs.go:282] 0 containers: []
	W1202 21:18:20.080318  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:20.080324  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:20.080396  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:20.104993  313474 cri.go:89] found id: ""
	I1202 21:18:20.105008  313474 logs.go:282] 0 containers: []
	W1202 21:18:20.105015  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:20.105024  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:20.105035  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:20.165434  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:20.165453  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:20.181890  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:20.181907  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:20.248978  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:20.240575   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.241189   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.242918   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.243424   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.244930   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:20.240575   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.241189   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.242918   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.243424   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.244930   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:20.248989  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:20.249000  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:20.310960  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:20.310980  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:22.840884  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:22.851984  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:22.852053  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:22.877753  313474 cri.go:89] found id: ""
	I1202 21:18:22.877766  313474 logs.go:282] 0 containers: []
	W1202 21:18:22.877773  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:22.877779  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:22.877837  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:22.906410  313474 cri.go:89] found id: ""
	I1202 21:18:22.906424  313474 logs.go:282] 0 containers: []
	W1202 21:18:22.906431  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:22.906437  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:22.906500  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:22.930057  313474 cri.go:89] found id: ""
	I1202 21:18:22.930071  313474 logs.go:282] 0 containers: []
	W1202 21:18:22.930077  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:22.930083  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:22.930143  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:22.953434  313474 cri.go:89] found id: ""
	I1202 21:18:22.953447  313474 logs.go:282] 0 containers: []
	W1202 21:18:22.953454  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:22.953460  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:22.953537  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:22.977521  313474 cri.go:89] found id: ""
	I1202 21:18:22.977534  313474 logs.go:282] 0 containers: []
	W1202 21:18:22.977541  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:22.977546  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:22.977605  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:23.002292  313474 cri.go:89] found id: ""
	I1202 21:18:23.002308  313474 logs.go:282] 0 containers: []
	W1202 21:18:23.002316  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:23.002322  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:23.002394  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:23.036373  313474 cri.go:89] found id: ""
	I1202 21:18:23.036387  313474 logs.go:282] 0 containers: []
	W1202 21:18:23.036395  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:23.036403  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:23.036415  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:23.095655  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:23.095673  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:23.111535  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:23.111553  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:23.173705  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:23.165173   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.166011   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.167619   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.168221   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.169997   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:23.165173   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.166011   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.167619   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.168221   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.169997   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:23.173715  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:23.173726  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:23.236268  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:23.236289  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:25.766078  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:25.775931  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:25.775992  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:25.803734  313474 cri.go:89] found id: ""
	I1202 21:18:25.803748  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.803755  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:25.803761  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:25.803819  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:25.834986  313474 cri.go:89] found id: ""
	I1202 21:18:25.834998  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.835005  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:25.835011  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:25.835067  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:25.868893  313474 cri.go:89] found id: ""
	I1202 21:18:25.868906  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.868914  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:25.868919  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:25.868978  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:25.893444  313474 cri.go:89] found id: ""
	I1202 21:18:25.893458  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.893465  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:25.893470  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:25.893535  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:25.920960  313474 cri.go:89] found id: ""
	I1202 21:18:25.920981  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.921016  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:25.921022  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:25.921084  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:25.945498  313474 cri.go:89] found id: ""
	I1202 21:18:25.945512  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.945519  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:25.945524  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:25.945584  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:25.970324  313474 cri.go:89] found id: ""
	I1202 21:18:25.970338  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.970345  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:25.970352  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:25.970363  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:26.026110  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:26.026130  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:26.042911  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:26.042929  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:26.110842  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:26.102647   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.103280   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.105091   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.105699   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.107315   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:26.102647   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.103280   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.105091   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.105699   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.107315   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:26.110852  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:26.110863  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:26.172311  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:26.172331  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:28.700308  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:28.710060  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:28.710120  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:28.735161  313474 cri.go:89] found id: ""
	I1202 21:18:28.735174  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.735181  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:28.735186  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:28.735244  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:28.759111  313474 cri.go:89] found id: ""
	I1202 21:18:28.759125  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.759132  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:28.759138  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:28.759195  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:28.782985  313474 cri.go:89] found id: ""
	I1202 21:18:28.782999  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.783006  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:28.783011  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:28.783069  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:28.820172  313474 cri.go:89] found id: ""
	I1202 21:18:28.820186  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.820203  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:28.820208  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:28.820274  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:28.850833  313474 cri.go:89] found id: ""
	I1202 21:18:28.850846  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.850863  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:28.850869  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:28.850927  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:28.882012  313474 cri.go:89] found id: ""
	I1202 21:18:28.882025  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.882032  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:28.882038  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:28.882093  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:28.908111  313474 cri.go:89] found id: ""
	I1202 21:18:28.908125  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.908132  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:28.908139  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:28.908150  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:28.934318  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:28.934333  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:28.989499  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:28.989518  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:29.007046  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:29.007064  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:29.083779  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:29.075539   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.076231   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.077811   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.078418   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.080191   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:29.075539   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.076231   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.077811   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.078418   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.080191   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:29.083789  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:29.083801  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:31.646079  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:31.657486  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:31.657549  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:31.683678  313474 cri.go:89] found id: ""
	I1202 21:18:31.683692  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.683699  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:31.683704  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:31.683759  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:31.712328  313474 cri.go:89] found id: ""
	I1202 21:18:31.712342  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.712349  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:31.712354  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:31.712410  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:31.736788  313474 cri.go:89] found id: ""
	I1202 21:18:31.736802  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.736808  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:31.736814  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:31.736870  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:31.761882  313474 cri.go:89] found id: ""
	I1202 21:18:31.761896  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.761903  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:31.761908  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:31.761968  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:31.785756  313474 cri.go:89] found id: ""
	I1202 21:18:31.785770  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.785778  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:31.785783  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:31.785843  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:31.820411  313474 cri.go:89] found id: ""
	I1202 21:18:31.820424  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.820431  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:31.820437  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:31.820493  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:31.853589  313474 cri.go:89] found id: ""
	I1202 21:18:31.853603  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.853611  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:31.853619  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:31.853630  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:31.921797  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:31.913330   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.913979   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.915473   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.915981   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.917835   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:31.913330   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.913979   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.915473   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.915981   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.917835   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:31.921807  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:31.921818  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:31.983142  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:31.983161  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:32.019032  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:32.019047  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:32.075826  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:32.075845  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:34.595298  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:34.606306  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:34.606370  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:34.629306  313474 cri.go:89] found id: ""
	I1202 21:18:34.629321  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.629328  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:34.629334  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:34.629393  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:34.653285  313474 cri.go:89] found id: ""
	I1202 21:18:34.653299  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.653305  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:34.653311  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:34.653369  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:34.679517  313474 cri.go:89] found id: ""
	I1202 21:18:34.679531  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.679538  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:34.679543  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:34.679601  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:34.703382  313474 cri.go:89] found id: ""
	I1202 21:18:34.703395  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.703403  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:34.703409  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:34.703472  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:34.726696  313474 cri.go:89] found id: ""
	I1202 21:18:34.726710  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.726717  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:34.726723  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:34.726784  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:34.751128  313474 cri.go:89] found id: ""
	I1202 21:18:34.751141  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.751148  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:34.751153  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:34.751213  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:34.775011  313474 cri.go:89] found id: ""
	I1202 21:18:34.775025  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.775032  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:34.775047  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:34.775057  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:34.835694  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:34.835712  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:34.852614  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:34.852628  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:34.915032  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:34.907089   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.907665   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.909375   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.909948   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.911554   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:34.907089   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.907665   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.909375   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.909948   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.911554   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:34.915042  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:34.915053  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:34.976914  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:34.976933  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:37.512733  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:37.523297  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:37.523360  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:37.547452  313474 cri.go:89] found id: ""
	I1202 21:18:37.547471  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.547478  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:37.547484  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:37.547553  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:37.573439  313474 cri.go:89] found id: ""
	I1202 21:18:37.573453  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.573460  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:37.573471  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:37.573529  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:37.597566  313474 cri.go:89] found id: ""
	I1202 21:18:37.597579  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.597586  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:37.597593  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:37.597689  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:37.622743  313474 cri.go:89] found id: ""
	I1202 21:18:37.622757  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.622764  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:37.622769  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:37.622833  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:37.650998  313474 cri.go:89] found id: ""
	I1202 21:18:37.651012  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.651019  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:37.651024  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:37.651082  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:37.675113  313474 cri.go:89] found id: ""
	I1202 21:18:37.675126  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.675133  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:37.675139  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:37.675198  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:37.703998  313474 cri.go:89] found id: ""
	I1202 21:18:37.704011  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.704019  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:37.704028  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:37.704039  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:37.731894  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:37.731909  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:37.789286  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:37.789304  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:37.806026  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:37.806041  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:37.883651  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:37.875622   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.876183   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.877815   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.878233   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.879787   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:37.875622   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.876183   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.877815   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.878233   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.879787   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:37.883661  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:37.883672  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
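	The pass above is the shape every retry in this section takes: minikube first checks for a kube-apiserver process (sudo pgrep -xnf kube-apiserver.*minikube.*), then asks the CRI runtime for each control-plane container and, finding none, falls through to log gathering. A minimal Go sketch of that probe loop follows, assuming crictl is installed on the node; the names listContainerIDs and components are illustrative, not minikube's actual identifiers.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// The components probed in each pass of the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

// listContainerIDs mirrors: sudo crictl ps -a --quiet --name=<name>
// and returns the container IDs crictl prints, one per line.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for attempt := 0; attempt < 10; attempt++ {
		allMissing := true
		for _, name := range components {
			ids, err := listContainerIDs(name)
			if err != nil || len(ids) == 0 {
				// Matches the repeated warning in the log above.
				fmt.Printf("No container was found matching %q\n", name)
			} else {
				allMissing = false
			}
		}
		if !allMissing {
			return
		}
		time.Sleep(3 * time.Second) // the log shows roughly 3s between passes
	}
}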
	I1202 21:18:40.449584  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:40.459754  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:40.459815  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:40.484277  313474 cri.go:89] found id: ""
	I1202 21:18:40.484290  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.484297  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:40.484303  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:40.484363  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:40.512957  313474 cri.go:89] found id: ""
	I1202 21:18:40.512971  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.512978  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:40.512984  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:40.513043  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:40.539344  313474 cri.go:89] found id: ""
	I1202 21:18:40.539357  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.539365  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:40.539371  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:40.539439  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:40.569762  313474 cri.go:89] found id: ""
	I1202 21:18:40.569776  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.569783  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:40.569789  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:40.569865  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:40.599530  313474 cri.go:89] found id: ""
	I1202 21:18:40.599589  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.599597  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:40.599603  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:40.599663  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:40.624508  313474 cri.go:89] found id: ""
	I1202 21:18:40.624521  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.624527  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:40.624533  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:40.624590  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:40.654772  313474 cri.go:89] found id: ""
	I1202 21:18:40.654786  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.654793  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:40.654800  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:40.654811  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:40.671128  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:40.671146  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:40.739442  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:40.731281   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.732035   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.733699   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.734266   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.735915   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:40.731281   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.732035   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.733699   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.734266   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.735915   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:40.739452  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:40.739465  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:40.802579  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:40.802600  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:40.842887  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:40.842905  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:43.407132  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:43.417207  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:43.417283  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:43.445187  313474 cri.go:89] found id: ""
	I1202 21:18:43.445201  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.445208  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:43.445214  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:43.445270  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:43.469935  313474 cri.go:89] found id: ""
	I1202 21:18:43.469949  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.469957  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:43.469962  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:43.470021  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:43.495370  313474 cri.go:89] found id: ""
	I1202 21:18:43.495383  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.495391  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:43.495396  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:43.495454  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:43.519120  313474 cri.go:89] found id: ""
	I1202 21:18:43.519133  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.519149  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:43.519155  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:43.519213  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:43.548201  313474 cri.go:89] found id: ""
	I1202 21:18:43.548216  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.548223  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:43.548228  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:43.548290  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:43.573077  313474 cri.go:89] found id: ""
	I1202 21:18:43.573091  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.573099  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:43.573104  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:43.573166  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:43.598032  313474 cri.go:89] found id: ""
	I1202 21:18:43.598046  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.598053  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:43.598062  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:43.598072  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:43.625764  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:43.625780  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:43.681770  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:43.681787  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:43.698012  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:43.698028  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:43.764049  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:43.756290   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.756978   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.758602   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.759087   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.760588   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:43.756290   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.756978   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.758602   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.759087   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.760588   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:43.764060  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:43.764071  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
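	Each time the container probes come up empty, the same four host-level sources are collected: the kubelet and containerd journals, recent kernel messages, and a container-status listing. A rough, self-contained equivalent of that gather step in Go is sketched below; the shell commands are copied verbatim from the log lines above, while gatherCmds and the surrounding structure are assumptions for illustration only.

package main

import (
	"fmt"
	"os/exec"
)

// Shell commands copied from the gather steps in the log above;
// the map keys are illustrative labels, not minikube identifiers.
var gatherCmds = map[string]string{
	"kubelet":          "sudo journalctl -u kubelet -n 400",
	"containerd":       "sudo journalctl -u containerd -n 400",
	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func main() {
	for label, cmd := range gatherCmds {
		fmt.Println("Gathering logs for", label, "...")
		// Run each command through bash, as ssh_runner does in the log.
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gather %s failed: %v\n", label, err)
		}
		fmt.Printf("%s", out)
	}
}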
	I1202 21:18:46.332493  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:46.342812  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:46.342877  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:46.367988  313474 cri.go:89] found id: ""
	I1202 21:18:46.368002  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.368018  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:46.368024  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:46.368091  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:46.392483  313474 cri.go:89] found id: ""
	I1202 21:18:46.392496  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.392512  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:46.392518  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:46.392574  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:46.429495  313474 cri.go:89] found id: ""
	I1202 21:18:46.429514  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.429522  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:46.429527  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:46.429598  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:46.455204  313474 cri.go:89] found id: ""
	I1202 21:18:46.455218  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.455225  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:46.455231  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:46.455295  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:46.479783  313474 cri.go:89] found id: ""
	I1202 21:18:46.479800  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.479808  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:46.479813  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:46.479880  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:46.504674  313474 cri.go:89] found id: ""
	I1202 21:18:46.504688  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.504696  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:46.504701  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:46.504767  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:46.534919  313474 cri.go:89] found id: ""
	I1202 21:18:46.534933  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.534940  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:46.534948  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:46.534968  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:46.591507  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:46.591526  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:46.607216  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:46.607233  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:46.672448  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:46.664475   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.665046   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.666657   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.667197   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.668631   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:46.664475   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.665046   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.666657   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.667197   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.668631   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:46.672459  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:46.672469  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:46.738404  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:46.738424  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:49.269367  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:49.279307  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:49.279370  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:49.302419  313474 cri.go:89] found id: ""
	I1202 21:18:49.302432  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.302439  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:49.302445  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:49.302501  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:49.328004  313474 cri.go:89] found id: ""
	I1202 21:18:49.328018  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.328025  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:49.328030  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:49.328088  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:49.352661  313474 cri.go:89] found id: ""
	I1202 21:18:49.352675  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.352682  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:49.352687  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:49.352746  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:49.377363  313474 cri.go:89] found id: ""
	I1202 21:18:49.377376  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.377383  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:49.377389  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:49.377447  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:49.401369  313474 cri.go:89] found id: ""
	I1202 21:18:49.401383  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.401390  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:49.401396  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:49.401461  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:49.425207  313474 cri.go:89] found id: ""
	I1202 21:18:49.425221  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.425228  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:49.425233  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:49.425295  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:49.451589  313474 cri.go:89] found id: ""
	I1202 21:18:49.451604  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.451611  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:49.451619  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:49.451630  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:49.513462  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:49.505690   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:49.506363   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:49.507990   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:49.508509   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:49.510072   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:49.505690   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:49.506363   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:49.507990   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:49.508509   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:49.510072   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:49.513472  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:49.513482  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:49.575782  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:49.575801  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:49.610890  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:49.610906  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:49.667106  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:49.667123  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:52.184506  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:52.194827  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:52.194887  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:52.221289  313474 cri.go:89] found id: ""
	I1202 21:18:52.221303  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.221310  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:52.221315  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:52.221385  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:52.247152  313474 cri.go:89] found id: ""
	I1202 21:18:52.247167  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.247174  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:52.247179  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:52.247240  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:52.270523  313474 cri.go:89] found id: ""
	I1202 21:18:52.270539  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.270545  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:52.270550  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:52.270610  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:52.294232  313474 cri.go:89] found id: ""
	I1202 21:18:52.294246  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.294253  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:52.294259  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:52.294321  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:52.322550  313474 cri.go:89] found id: ""
	I1202 21:18:52.322563  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.322570  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:52.322576  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:52.322635  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:52.350081  313474 cri.go:89] found id: ""
	I1202 21:18:52.350095  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.350103  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:52.350110  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:52.350171  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:52.373782  313474 cri.go:89] found id: ""
	I1202 21:18:52.373796  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.373817  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:52.373826  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:52.373836  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:52.429396  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:52.429415  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:52.445303  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:52.445319  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:52.509061  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:52.500762   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:52.501579   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:52.503214   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:52.503522   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:52.505017   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:52.500762   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:52.501579   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:52.503214   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:52.503522   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:52.505017   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:52.509073  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:52.509087  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:52.572171  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:52.572191  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:55.105321  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:55.115684  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:55.115746  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:55.143285  313474 cri.go:89] found id: ""
	I1202 21:18:55.143301  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.143313  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:55.143319  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:55.143379  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:55.168631  313474 cri.go:89] found id: ""
	I1202 21:18:55.168645  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.168652  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:55.168658  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:55.168718  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:55.194277  313474 cri.go:89] found id: ""
	I1202 21:18:55.194290  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.194297  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:55.194303  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:55.194361  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:55.221594  313474 cri.go:89] found id: ""
	I1202 21:18:55.221607  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.221614  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:55.221620  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:55.221738  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:55.245639  313474 cri.go:89] found id: ""
	I1202 21:18:55.245684  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.245691  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:55.245697  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:55.245758  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:55.270064  313474 cri.go:89] found id: ""
	I1202 21:18:55.270078  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.270085  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:55.270091  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:55.270151  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:55.298494  313474 cri.go:89] found id: ""
	I1202 21:18:55.298508  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.298515  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:55.298524  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:55.298534  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:55.354337  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:55.354358  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:55.371291  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:55.371306  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:55.441025  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:55.432197   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:55.433031   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:55.434888   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:55.435565   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:55.437238   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:55.432197   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:55.433031   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:55.434888   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:55.435565   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:55.437238   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:55.441036  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:55.441048  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:55.508470  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:55.508491  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
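	The describe-nodes step keeps failing for one reason visible in every stderr block above: nothing is listening on localhost:8441, so kubectl's discovery requests are refused before they reach any API. A tiny Go check that reproduces just that symptom on the same endpoint (the port comes from the --apiserver-port=8441 flag this test passes to minikube start):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint kubectl tries in the log: localhost:8441.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// With no kube-apiserver container running, this prints a
		// "connection refused" error, matching the stderr blocks above.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}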
	I1202 21:18:58.040648  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:58.052163  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:58.052231  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:58.082641  313474 cri.go:89] found id: ""
	I1202 21:18:58.082655  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.082663  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:58.082668  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:58.082727  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:58.109547  313474 cri.go:89] found id: ""
	I1202 21:18:58.109561  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.109579  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:58.109585  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:58.109687  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:58.134886  313474 cri.go:89] found id: ""
	I1202 21:18:58.134900  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.134908  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:58.134913  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:58.134973  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:58.158535  313474 cri.go:89] found id: ""
	I1202 21:18:58.158549  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.158555  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:58.158561  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:58.158626  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:58.181483  313474 cri.go:89] found id: ""
	I1202 21:18:58.181498  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.181505  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:58.181510  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:58.181567  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:58.207661  313474 cri.go:89] found id: ""
	I1202 21:18:58.207675  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.207682  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:58.207687  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:58.207744  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:58.231079  313474 cri.go:89] found id: ""
	I1202 21:18:58.231092  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.231099  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:58.231107  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:58.231117  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:58.286068  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:58.286086  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:58.301966  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:58.301983  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:58.371817  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:58.363950   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:58.364690   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:58.366066   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:58.366688   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:58.368325   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:58.363950   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:58.364690   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:58.366066   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:58.366688   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:58.368325   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:58.371827  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:58.371838  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:58.434916  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:58.434935  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:00.970468  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:00.981089  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:00.981161  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:01.007840  313474 cri.go:89] found id: ""
	I1202 21:19:01.007855  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.007863  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:01.007868  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:01.007927  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:01.032203  313474 cri.go:89] found id: ""
	I1202 21:19:01.032217  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.032224  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:01.032229  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:01.032300  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:01.065098  313474 cri.go:89] found id: ""
	I1202 21:19:01.065111  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.065119  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:01.065124  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:01.065186  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:01.091481  313474 cri.go:89] found id: ""
	I1202 21:19:01.091495  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.091502  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:01.091508  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:01.091584  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:01.119523  313474 cri.go:89] found id: ""
	I1202 21:19:01.119538  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.119546  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:01.119552  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:01.119617  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:01.145559  313474 cri.go:89] found id: ""
	I1202 21:19:01.145574  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.145584  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:01.145590  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:01.145699  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:01.171870  313474 cri.go:89] found id: ""
	I1202 21:19:01.171885  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.171892  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:01.171900  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:01.171929  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:01.236730  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:01.228637   16526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:01.229293   16526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:01.230833   16526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:01.231277   16526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:01.232768   16526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:01.236741  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:01.236752  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:01.298712  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:01.298731  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:01.327192  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:01.327213  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:01.382852  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:01.382869  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
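Each pass above queries the CRI for one control-plane component at a time and records an empty ID list, which is what produces the repeated `0 containers: []` / `No container was found matching ...` pairs. A minimal sketch of that listing step, assuming a local crictl and a hypothetical runCrictl helper in place of minikube's ssh_runner (this is not the actual cri.go code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // runCrictl is a hypothetical stand-in for minikube's ssh_runner: it runs
    // crictl locally instead of over SSH inside the node.
    func runCrictl(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        // --quiet prints one container ID per line; empty output means no matches.
        return strings.Fields(strings.TrimSpace(string(out))), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet"}
        for _, c := range components {
            ids, err := runCrictl(c)
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", c)
                continue
            }
            fmt.Printf("%s: %v\n", c, ids)
        }
    }

Since --quiet prints one container ID per line, an empty stdout is exactly the `found id: ""` case logged above.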
	I1202 21:19:03.899143  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:03.908997  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:03.909059  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:03.932688  313474 cri.go:89] found id: ""
	I1202 21:19:03.932701  313474 logs.go:282] 0 containers: []
	W1202 21:19:03.932708  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:03.932714  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:03.932773  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:03.957073  313474 cri.go:89] found id: ""
	I1202 21:19:03.957087  313474 logs.go:282] 0 containers: []
	W1202 21:19:03.957095  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:03.957100  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:03.957161  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:03.981206  313474 cri.go:89] found id: ""
	I1202 21:19:03.981219  313474 logs.go:282] 0 containers: []
	W1202 21:19:03.981233  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:03.981239  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:03.981301  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:04.008306  313474 cri.go:89] found id: ""
	I1202 21:19:04.008322  313474 logs.go:282] 0 containers: []
	W1202 21:19:04.008329  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:04.008335  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:04.008401  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:04.033825  313474 cri.go:89] found id: ""
	I1202 21:19:04.033839  313474 logs.go:282] 0 containers: []
	W1202 21:19:04.033847  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:04.033853  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:04.033912  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:04.062862  313474 cri.go:89] found id: ""
	I1202 21:19:04.062876  313474 logs.go:282] 0 containers: []
	W1202 21:19:04.062883  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:04.062890  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:04.062957  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:04.098358  313474 cri.go:89] found id: ""
	I1202 21:19:04.098372  313474 logs.go:282] 0 containers: []
	W1202 21:19:04.098379  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:04.098388  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:04.098398  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:04.160856  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:04.160874  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:04.176607  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:04.176625  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:04.239202  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:04.231372   16639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:04.231808   16639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:04.233616   16639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:04.233967   16639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:04.235435   16639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:04.239213  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:04.239224  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:04.304570  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:04.304588  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:06.834974  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:06.846425  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:06.846496  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:06.874499  313474 cri.go:89] found id: ""
	I1202 21:19:06.874513  313474 logs.go:282] 0 containers: []
	W1202 21:19:06.874520  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:06.874526  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:06.874585  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:06.899405  313474 cri.go:89] found id: ""
	I1202 21:19:06.899419  313474 logs.go:282] 0 containers: []
	W1202 21:19:06.899426  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:06.899432  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:06.899490  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:06.927927  313474 cri.go:89] found id: ""
	I1202 21:19:06.927940  313474 logs.go:282] 0 containers: []
	W1202 21:19:06.927947  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:06.927953  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:06.928017  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:06.956416  313474 cri.go:89] found id: ""
	I1202 21:19:06.956430  313474 logs.go:282] 0 containers: []
	W1202 21:19:06.956437  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:06.956443  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:06.956503  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:06.982016  313474 cri.go:89] found id: ""
	I1202 21:19:06.982030  313474 logs.go:282] 0 containers: []
	W1202 21:19:06.982038  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:06.982043  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:06.982102  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:07.008744  313474 cri.go:89] found id: ""
	I1202 21:19:07.008758  313474 logs.go:282] 0 containers: []
	W1202 21:19:07.008765  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:07.008771  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:07.008831  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:07.051903  313474 cri.go:89] found id: ""
	I1202 21:19:07.051917  313474 logs.go:282] 0 containers: []
	W1202 21:19:07.051924  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:07.051933  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:07.051956  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:07.111866  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:07.111885  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:07.131193  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:07.131212  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:07.197137  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:07.189103   16738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:07.189535   16738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:07.191127   16738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:07.191787   16738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:07.193245   16738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:07.197148  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:07.197159  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:07.258783  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:07.258802  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
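Every `kubectl describe nodes` attempt in this stretch fails before any API call is made: nothing is listening on localhost:8441, so the TCP connect itself is refused. A stdlib-only sketch of that failure mode (the address comes from the log; the rest is illustrative):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same endpoint kubectl is dialing in the log; with no kube-apiserver
        // bound to the port, the kernel refuses the handshake and Dial returns
        // "connect: connection refused" immediately.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }

Run against a node in this state, it prints the same `connect: connection refused` that kubectl reports above.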
	I1202 21:19:09.784238  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:09.795790  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:09.795850  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:09.821880  313474 cri.go:89] found id: ""
	I1202 21:19:09.821894  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.821902  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:09.821907  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:09.821970  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:09.845564  313474 cri.go:89] found id: ""
	I1202 21:19:09.845579  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.845586  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:09.845617  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:09.845698  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:09.874848  313474 cri.go:89] found id: ""
	I1202 21:19:09.874862  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.874875  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:09.874880  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:09.874939  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:09.899396  313474 cri.go:89] found id: ""
	I1202 21:19:09.899410  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.899417  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:09.899423  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:09.899485  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:09.928207  313474 cri.go:89] found id: ""
	I1202 21:19:09.928231  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.928291  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:09.928297  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:09.928367  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:09.953363  313474 cri.go:89] found id: ""
	I1202 21:19:09.953386  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.953393  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:09.953400  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:09.953478  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:09.977852  313474 cri.go:89] found id: ""
	I1202 21:19:09.977866  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.977873  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:09.977881  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:09.977891  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:10.035535  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:10.035554  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:10.053223  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:10.053240  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:10.129538  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:10.121217   16842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:10.122156   16842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:10.123949   16842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:10.124266   16842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:10.125909   16842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:10.129549  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:10.129561  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:10.196069  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:10.196089  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:12.729098  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:12.739162  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:12.739221  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:12.762279  313474 cri.go:89] found id: ""
	I1202 21:19:12.762293  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.762300  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:12.762305  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:12.762405  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:12.787279  313474 cri.go:89] found id: ""
	I1202 21:19:12.787293  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.787300  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:12.787306  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:12.787364  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:12.812545  313474 cri.go:89] found id: ""
	I1202 21:19:12.812558  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.812566  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:12.812571  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:12.812642  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:12.840741  313474 cri.go:89] found id: ""
	I1202 21:19:12.840755  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.840762  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:12.840767  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:12.840824  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:12.868898  313474 cri.go:89] found id: ""
	I1202 21:19:12.868912  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.868919  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:12.868924  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:12.868983  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:12.895296  313474 cri.go:89] found id: ""
	I1202 21:19:12.895310  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.895317  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:12.895322  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:12.895382  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:12.918838  313474 cri.go:89] found id: ""
	I1202 21:19:12.918852  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.918859  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:12.918867  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:12.918880  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:12.989410  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:12.989434  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:13.018849  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:13.018864  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:13.075957  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:13.075976  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:13.095483  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:13.095501  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:13.160629  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:13.153520   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:13.154016   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:13.155471   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:13.155775   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:13.157071   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:15.660888  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:15.670559  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:15.670624  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:15.693948  313474 cri.go:89] found id: ""
	I1202 21:19:15.693961  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.693969  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:15.693974  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:15.694041  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:15.720374  313474 cri.go:89] found id: ""
	I1202 21:19:15.720389  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.720396  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:15.720401  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:15.720460  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:15.745246  313474 cri.go:89] found id: ""
	I1202 21:19:15.745259  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.745267  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:15.745272  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:15.745339  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:15.772221  313474 cri.go:89] found id: ""
	I1202 21:19:15.772234  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.772241  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:15.772247  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:15.772317  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:15.795604  313474 cri.go:89] found id: ""
	I1202 21:19:15.795618  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.795624  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:15.795630  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:15.795687  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:15.824167  313474 cri.go:89] found id: ""
	I1202 21:19:15.824180  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.824187  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:15.824193  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:15.824252  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:15.847367  313474 cri.go:89] found id: ""
	I1202 21:19:15.847380  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.847387  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:15.847396  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:15.847406  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:15.901801  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:15.901820  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:15.917208  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:15.917228  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:15.976565  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:15.969576   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:15.970085   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:15.971162   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:15.971576   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:15.973029   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:15.976575  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:15.976586  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:16.041174  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:16.041192  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
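The `ssh_runner.go:195] Run:` prefix on each command means it is executed inside the node over SSH. A rough sketch of that pattern with golang.org/x/crypto/ssh; the user, password, and forwarded port here are placeholders, and minikube's real runner additionally handles key auth, retries, and output streaming:

    package main

    import (
        "fmt"
        "log"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        cfg := &ssh.ClientConfig{
            User:            "docker", // illustrative node user
            Auth:            []ssh.AuthMethod{ssh.Password("tcuser")},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test node
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:2222", cfg) // hypothetical forwarded SSH port
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        // One session per command, mirroring the Run: lines in the log.
        out, err := sess.CombinedOutput("sudo crictl ps -a --quiet --name=kube-apiserver")
        fmt.Printf("output: %q err: %v\n", out, err)
    }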
	I1202 21:19:18.580269  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:18.590169  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:18.590245  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:18.615027  313474 cri.go:89] found id: ""
	I1202 21:19:18.615042  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.615049  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:18.615055  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:18.615135  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:18.640491  313474 cri.go:89] found id: ""
	I1202 21:19:18.640505  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.640512  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:18.640517  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:18.640584  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:18.665078  313474 cri.go:89] found id: ""
	I1202 21:19:18.665092  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.665099  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:18.665105  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:18.665162  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:18.689844  313474 cri.go:89] found id: ""
	I1202 21:19:18.689858  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.689865  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:18.689871  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:18.689928  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:18.715165  313474 cri.go:89] found id: ""
	I1202 21:19:18.715179  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.715186  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:18.715191  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:18.715250  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:18.740098  313474 cri.go:89] found id: ""
	I1202 21:19:18.740111  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.740118  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:18.740124  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:18.740181  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:18.764406  313474 cri.go:89] found id: ""
	I1202 21:19:18.764420  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.764427  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:18.764435  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:18.764448  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:18.795780  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:18.795801  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:18.851180  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:18.851199  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:18.867072  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:18.867088  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:18.932904  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:18.925224   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:18.926353   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:18.927456   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:18.928040   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:18.929537   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:18.932917  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:18.932930  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:21.499766  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:21.511750  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:21.511824  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:21.541598  313474 cri.go:89] found id: ""
	I1202 21:19:21.541612  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.541619  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:21.541624  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:21.541710  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:21.565690  313474 cri.go:89] found id: ""
	I1202 21:19:21.565705  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.565712  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:21.565717  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:21.565786  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:21.588975  313474 cri.go:89] found id: ""
	I1202 21:19:21.588989  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.588996  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:21.589002  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:21.589060  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:21.616075  313474 cri.go:89] found id: ""
	I1202 21:19:21.616100  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.616108  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:21.616114  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:21.616189  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:21.640380  313474 cri.go:89] found id: ""
	I1202 21:19:21.640393  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.640410  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:21.640416  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:21.640473  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:21.664881  313474 cri.go:89] found id: ""
	I1202 21:19:21.664895  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.664912  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:21.664919  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:21.664976  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:21.688940  313474 cri.go:89] found id: ""
	I1202 21:19:21.688961  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.688968  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:21.688976  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:21.688986  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:21.747031  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:21.747050  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:21.762969  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:21.762988  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:21.829106  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:21.821852   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:21.822216   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:21.823843   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:21.824190   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:21.825723   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:21.829117  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:21.829142  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:21.890717  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:21.890735  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
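The cycle timestamps (21:19:01, :03.9, :06.8, :09.8, ...) show the whole health check rerunning roughly every three seconds until the start timeout is spent. A bare-bones version of that retry shape, with a hypothetical apiserverUp probe standing in for the pgrep/crictl/log-gathering sequence and an assumed overall budget:

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    // apiserverUp is a hypothetical health probe; the real check in these logs
    // combines pgrep, crictl listings, and log gathering.
    func apiserverUp() bool {
        conn, err := net.DialTimeout("tcp", "localhost:8441", time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()

        tick := time.NewTicker(3 * time.Second) // roughly the cadence visible in the timestamps
        defer tick.Stop()

        for {
            select {
            case <-ctx.Done():
                fmt.Println("gave up waiting for kube-apiserver:", ctx.Err())
                return
            case <-tick.C:
                if apiserverUp() {
                    fmt.Println("kube-apiserver is answering on :8441")
                    return
                }
            }
        }
    }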
	I1202 21:19:24.418721  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:24.428805  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:24.428867  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:24.454807  313474 cri.go:89] found id: ""
	I1202 21:19:24.454820  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.454827  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:24.454844  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:24.454905  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:24.479376  313474 cri.go:89] found id: ""
	I1202 21:19:24.479390  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.479396  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:24.479402  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:24.479459  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:24.504161  313474 cri.go:89] found id: ""
	I1202 21:19:24.504174  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.504181  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:24.504195  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:24.504257  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:24.529438  313474 cri.go:89] found id: ""
	I1202 21:19:24.529452  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.529460  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:24.529466  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:24.529540  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:24.554237  313474 cri.go:89] found id: ""
	I1202 21:19:24.554251  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.554258  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:24.554264  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:24.554322  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:24.583978  313474 cri.go:89] found id: ""
	I1202 21:19:24.583992  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.583999  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:24.584005  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:24.584071  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:24.608672  313474 cri.go:89] found id: ""
	I1202 21:19:24.608686  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.608694  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:24.608702  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:24.608711  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:24.663382  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:24.663399  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:24.678935  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:24.678953  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:24.741560  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:24.733345   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.733924   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.735511   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.736192   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.737811   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:19:24.733345   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.733924   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.735511   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.736192   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.737811   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
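Every one of these "describe nodes" attempts fails the same way: kubectl on the node cannot reach the apiserver on localhost:8441, so API discovery dies with "connection refused" before any request is made. A rough manual check (assuming shell access to the node, e.g. via minikube ssh; the commands below are standard tooling, not taken from this log):

    # Is anything listening on the apiserver port at all?
    sudo ss -ltnp | grep 8441
    # Probe the same endpoint kubectl uses; expect "connection refused" while no apiserver runs
    curl -k "https://localhost:8441/api?timeout=32s"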
	I1202 21:19:24.741571  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:24.741584  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:24.805991  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:24.806014  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
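This probe/collect block repeats every few seconds: minikube asks crictl for each expected control-plane container by name, finds none, and falls back to dumping kubelet, dmesg, containerd, and container-status logs. The probe can be reproduced by hand with the same crictl invocation the log shows (the loop wrapper here is ours):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name=$name
    done

No output for any name, as here, means the runtime never created a single control-plane container.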
	I1202 21:19:27.332486  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:27.343923  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:27.343980  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:27.370846  313474 cri.go:89] found id: ""
	I1202 21:19:27.370862  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.370869  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:27.370874  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:27.370933  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:27.394765  313474 cri.go:89] found id: ""
	I1202 21:19:27.394779  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.394786  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:27.394791  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:27.394858  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:27.418228  313474 cri.go:89] found id: ""
	I1202 21:19:27.418241  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.418248  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:27.418254  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:27.418312  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:27.442428  313474 cri.go:89] found id: ""
	I1202 21:19:27.442441  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.442448  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:27.442454  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:27.442516  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:27.467409  313474 cri.go:89] found id: ""
	I1202 21:19:27.467423  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.467430  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:27.467435  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:27.467492  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:27.490186  313474 cri.go:89] found id: ""
	I1202 21:19:27.490200  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.490207  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:27.490213  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:27.490270  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:27.515032  313474 cri.go:89] found id: ""
	I1202 21:19:27.515046  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.515054  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:27.515062  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:27.515072  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:27.570118  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:27.570137  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:27.585958  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:27.585974  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:27.649259  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:27.641242   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.641812   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.643494   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.644027   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.645611   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:19:27.641242   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.641812   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.643494   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.644027   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.645611   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:19:27.649269  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:27.649288  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:27.711120  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:27.711140  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:30.243770  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:30.255318  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:30.255385  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:30.279952  313474 cri.go:89] found id: ""
	I1202 21:19:30.279966  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.279974  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:30.279979  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:30.280039  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:30.320036  313474 cri.go:89] found id: ""
	I1202 21:19:30.320049  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.320056  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:30.320061  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:30.320119  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:30.351365  313474 cri.go:89] found id: ""
	I1202 21:19:30.351378  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.351385  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:30.351391  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:30.351449  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:30.378208  313474 cri.go:89] found id: ""
	I1202 21:19:30.378221  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.378228  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:30.378234  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:30.378293  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:30.404248  313474 cri.go:89] found id: ""
	I1202 21:19:30.404262  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.404268  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:30.404274  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:30.404331  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:30.428678  313474 cri.go:89] found id: ""
	I1202 21:19:30.428691  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.428698  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:30.428714  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:30.428786  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:30.452008  313474 cri.go:89] found id: ""
	I1202 21:19:30.452021  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.452039  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:30.452047  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:30.452057  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:30.506509  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:30.506530  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:30.522444  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:30.522464  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:30.585091  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:30.576660   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.577294   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.579170   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.579871   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.581501   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:19:30.576660   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.577294   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.579170   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.579871   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.581501   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:19:30.585102  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:30.585112  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:30.649461  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:30.649484  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:33.184340  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:33.195406  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:33.195468  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:33.220999  313474 cri.go:89] found id: ""
	I1202 21:19:33.221013  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.221020  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:33.221026  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:33.221087  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:33.245046  313474 cri.go:89] found id: ""
	I1202 21:19:33.245060  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.245068  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:33.245073  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:33.245134  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:33.268397  313474 cri.go:89] found id: ""
	I1202 21:19:33.268410  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.268417  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:33.268423  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:33.268485  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:33.304556  313474 cri.go:89] found id: ""
	I1202 21:19:33.304569  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.304577  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:33.304582  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:33.304643  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:33.335992  313474 cri.go:89] found id: ""
	I1202 21:19:33.336006  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.336013  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:33.336019  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:33.336086  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:33.367967  313474 cri.go:89] found id: ""
	I1202 21:19:33.367980  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.367989  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:33.367995  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:33.368052  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:33.393839  313474 cri.go:89] found id: ""
	I1202 21:19:33.393853  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.393860  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:33.393867  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:33.393877  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:33.448875  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:33.448894  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:33.464807  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:33.464822  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:33.531228  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:33.523917   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.524445   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.525987   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.526306   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.527749   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:19:33.523917   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.524445   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.525987   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.526306   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.527749   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:19:33.531238  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:33.531248  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:33.592933  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:33.592951  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:36.121943  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:36.132447  313474 kubeadm.go:602] duration metric: took 4m4.151661323s to restartPrimaryControlPlane
	W1202 21:19:36.132510  313474 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 21:19:36.132588  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
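At this point minikube abandons the restart path: 4m4s of probing turned up no control-plane containers, so it wipes the node and retries a clean kubeadm init. The reset invocation above is equivalent to:

    sudo /bin/bash -c 'env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force'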
	I1202 21:19:36.539188  313474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:19:36.552660  313474 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 21:19:36.560203  313474 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 21:19:36.560257  313474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:19:36.567605  313474 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 21:19:36.567615  313474 kubeadm.go:158] found existing configuration files:
	
	I1202 21:19:36.567669  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:19:36.575238  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 21:19:36.575292  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 21:19:36.582200  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:19:36.589483  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 21:19:36.589539  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:19:36.596652  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:19:36.604117  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 21:19:36.604180  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:19:36.611312  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:19:36.619074  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 21:19:36.619140  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
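The four grep/rm pairs above are minikube's stale-kubeconfig sweep: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8441 is removed before re-init. Here every file is already gone (grep exits with status 2 on a missing file), so the rm calls are no-ops. As a sketch, the whole sweep is equivalent to:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done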
	I1202 21:19:36.626580  313474 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 21:19:36.665764  313474 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 21:19:36.665850  313474 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 21:19:36.739165  313474 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 21:19:36.739244  313474 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 21:19:36.739289  313474 kubeadm.go:319] OS: Linux
	I1202 21:19:36.739345  313474 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 21:19:36.739401  313474 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 21:19:36.739460  313474 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 21:19:36.739515  313474 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 21:19:36.739574  313474 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 21:19:36.739631  313474 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 21:19:36.739681  313474 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 21:19:36.739743  313474 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 21:19:36.739800  313474 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 21:19:36.802641  313474 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 21:19:36.802776  313474 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 21:19:36.802889  313474 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 21:19:36.810139  313474 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 21:19:36.815519  313474 out.go:252]   - Generating certificates and keys ...
	I1202 21:19:36.815612  313474 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 21:19:36.815684  313474 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 21:19:36.815766  313474 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 21:19:36.815832  313474 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 21:19:36.815906  313474 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 21:19:36.815965  313474 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 21:19:36.816035  313474 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 21:19:36.816096  313474 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 21:19:36.816180  313474 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 21:19:36.816258  313474 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 21:19:36.816301  313474 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 21:19:36.816363  313474 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 21:19:36.979466  313474 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 21:19:37.030688  313474 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 21:19:37.178864  313474 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 21:19:37.287458  313474 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 21:19:37.759486  313474 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 21:19:37.759977  313474 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 21:19:37.764136  313474 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 21:19:37.767507  313474 out.go:252]   - Booting up control plane ...
	I1202 21:19:37.767615  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 21:19:37.767697  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 21:19:37.768187  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 21:19:37.789119  313474 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 21:19:37.789389  313474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 21:19:37.796801  313474 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 21:19:37.797075  313474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 21:19:37.797116  313474 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 21:19:37.935526  313474 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 21:19:37.935655  313474 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 21:23:37.935181  313474 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000055683s
	I1202 21:23:37.935206  313474 kubeadm.go:319] 
	I1202 21:23:37.935262  313474 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 21:23:37.935294  313474 kubeadm.go:319] 	- The kubelet is not running
	I1202 21:23:37.935397  313474 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 21:23:37.935402  313474 kubeadm.go:319] 
	I1202 21:23:37.935505  313474 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 21:23:37.935535  313474 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 21:23:37.935565  313474 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 21:23:37.935567  313474 kubeadm.go:319] 
	I1202 21:23:37.939509  313474 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 21:23:37.940015  313474 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 21:23:37.940174  313474 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 21:23:37.940488  313474 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1202 21:23:37.940494  313474 kubeadm.go:319] 
	I1202 21:23:37.940592  313474 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
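The init attempt dies in the wait-control-plane phase: kubeadm polls the kubelet's health endpoint for the full 4m0s budget and never gets an answer. The triage kubeadm itself suggests, plus the health probe it performs, can be run directly on the node:

    systemctl status kubelet
    journalctl -xeu kubelet -n 100
    curl -sSL http://127.0.0.1:10248/healthz    # a healthy kubelet answers "ok"

Since the kubelet never bound port 10248 at all ("connection refused" rather than an unhealthy response), the journalctl output is the place to look for why it exited on startup.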
	W1202 21:23:37.940735  313474 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000055683s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
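Of the three preflight warnings repeated above, the cgroups one is the most plausible culprit on this 5.15.0-1084-aws kernel: per the warning (and its linked KEP), kubelet v1.35+ refuses to run on a cgroup v1 host unless the KubeletConfiguration explicitly sets the option the warning names, FailCgroupV1, to false. A standard way to confirm which cgroup version the host is on (not taken from this log):

    stat -fc %T /sys/fs/cgroup    # cgroup2fs => cgroup v2, tmpfs => cgroup v1

If this prints tmpfs, the kubelet configuration written to /var/lib/kubelet/config.yaml above would need that option set to false for this Kubernetes version to start at all.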
	
	I1202 21:23:37.940819  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1202 21:23:38.352160  313474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:23:38.364903  313474 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 21:23:38.364957  313474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:23:38.373626  313474 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 21:23:38.373635  313474 kubeadm.go:158] found existing configuration files:
	
	I1202 21:23:38.373703  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:23:38.380912  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 21:23:38.380966  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 21:23:38.387986  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:23:38.395511  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 21:23:38.395567  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:23:38.403067  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:23:38.410435  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 21:23:38.410491  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:23:38.417648  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:23:38.425411  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 21:23:38.425466  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 21:23:38.432690  313474 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 21:23:38.469901  313474 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 21:23:38.470170  313474 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 21:23:38.543545  313474 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 21:23:38.543611  313474 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 21:23:38.543646  313474 kubeadm.go:319] OS: Linux
	I1202 21:23:38.543689  313474 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 21:23:38.543736  313474 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 21:23:38.543782  313474 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 21:23:38.543829  313474 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 21:23:38.543876  313474 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 21:23:38.543922  313474 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 21:23:38.543966  313474 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 21:23:38.544013  313474 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 21:23:38.544058  313474 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 21:23:38.612266  313474 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 21:23:38.612377  313474 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 21:23:38.612479  313474 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 21:23:38.617939  313474 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 21:23:38.623176  313474 out.go:252]   - Generating certificates and keys ...
	I1202 21:23:38.623272  313474 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 21:23:38.623347  313474 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 21:23:38.623429  313474 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 21:23:38.623494  313474 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 21:23:38.623569  313474 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 21:23:38.623628  313474 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 21:23:38.623699  313474 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 21:23:38.623765  313474 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 21:23:38.623849  313474 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 21:23:38.623933  313474 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 21:23:38.623979  313474 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 21:23:38.624034  313474 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 21:23:39.195644  313474 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 21:23:40.418759  313474 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 21:23:40.662567  313474 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 21:23:41.331428  313474 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 21:23:41.582387  313474 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 21:23:41.582932  313474 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 21:23:41.585414  313474 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 21:23:41.588388  313474 out.go:252]   - Booting up control plane ...
	I1202 21:23:41.588487  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 21:23:41.588564  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 21:23:41.588629  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 21:23:41.609723  313474 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 21:23:41.609836  313474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 21:23:41.617428  313474 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 21:23:41.617997  313474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 21:23:41.618040  313474 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 21:23:41.754122  313474 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 21:23:41.754238  313474 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 21:27:41.753164  313474 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001114938s
	I1202 21:27:41.753189  313474 kubeadm.go:319] 
	I1202 21:27:41.753242  313474 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 21:27:41.753272  313474 kubeadm.go:319] 	- The kubelet is not running
	I1202 21:27:41.753369  313474 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 21:27:41.753373  313474 kubeadm.go:319] 
	I1202 21:27:41.753470  313474 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 21:27:41.753499  313474 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 21:27:41.753527  313474 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 21:27:41.753530  313474 kubeadm.go:319] 
	I1202 21:27:41.757163  313474 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 21:27:41.757586  313474 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 21:27:41.757709  313474 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 21:27:41.757943  313474 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 21:27:41.757948  313474 kubeadm.go:319] 
	I1202 21:27:41.758016  313474 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
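The second init attempt fails identically (the only difference is the final healthz error: "context deadline exceeded" instead of "connection refused"), which argues against a transient image-pull or timing problem. One quick sanity check before reading the kubelet journal is whether the phases above actually wrote the static pod manifests they claim:

    ls -la /etc/kubernetes/manifests/    # expect etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml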
	I1202 21:27:41.758065  313474 kubeadm.go:403] duration metric: took 12m9.810714629s to StartCluster
	I1202 21:27:41.758097  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:27:41.758157  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:27:41.783479  313474 cri.go:89] found id: ""
	I1202 21:27:41.783492  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.783500  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:27:41.783505  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:27:41.783577  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:27:41.814610  313474 cri.go:89] found id: ""
	I1202 21:27:41.814624  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.814631  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:27:41.814644  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:27:41.814702  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:27:41.844545  313474 cri.go:89] found id: ""
	I1202 21:27:41.844559  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.844566  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:27:41.844571  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:27:41.844630  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:27:41.876235  313474 cri.go:89] found id: ""
	I1202 21:27:41.876250  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.876257  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:27:41.876262  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:27:41.876320  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:27:41.899944  313474 cri.go:89] found id: ""
	I1202 21:27:41.899957  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.899964  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:27:41.899969  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:27:41.900027  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:27:41.924640  313474 cri.go:89] found id: ""
	I1202 21:27:41.924653  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.924660  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:27:41.924666  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:27:41.924723  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:27:41.951344  313474 cri.go:89] found id: ""
	I1202 21:27:41.951358  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.951365  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:27:41.951373  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:27:41.951383  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:27:42.009004  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:27:42.009028  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:27:42.033968  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:27:42.033989  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:27:42.114849  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:27:42.103925   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.104852   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.106932   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.108645   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.109525   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:27:42.103925   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.104852   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.106932   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.108645   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.109525   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:27:42.114863  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:27:42.114875  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:27:42.193571  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:27:42.193593  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 21:27:42.259231  313474 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001114938s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 21:27:42.259270  313474 out.go:285] * 
	W1202 21:27:42.259601  313474 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001114938s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 21:27:42.259616  313474 out.go:285] * 
	W1202 21:27:42.262291  313474 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:27:42.269405  313474 out.go:203] 
	W1202 21:27:42.272139  313474 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001114938s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 21:27:42.272287  313474 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 21:27:42.272371  313474 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 21:27:42.276351  313474 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609401010Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609411414Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609426076Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609435913Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609447105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609462194Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609496941Z" level=info msg="runtime interface created"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609503784Z" level=info msg="created NRI interface"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609513794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609548107Z" level=info msg="Connect containerd service"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609923390Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.610459300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.628985739Z" level=info msg="Start subscribing containerd event"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.629232566Z" level=info msg="Start recovering state"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.630271509Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.630432538Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655348240Z" level=info msg="Start event monitor"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655522692Z" level=info msg="Start cni network conf syncer for default"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655586969Z" level=info msg="Start streaming server"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655657638Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655717968Z" level=info msg="runtime interface starting up..."
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655774631Z" level=info msg="starting plugins..."
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655837464Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 21:15:30 functional-753958 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.657496581Z" level=info msg="containerd successfully booted in 0.074787s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:29:57.705239   23726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:29:57.705749   23726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:29:57.707538   23726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:29:57.708067   23726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:29:57.709670   23726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 21:29:57 up  3:12,  0 user,  load average: 0.81, 0.32, 0.51
	Linux functional-753958 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:29:54 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:29:55 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 498.
	Dec 02 21:29:55 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:29:55 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:29:55 functional-753958 kubelet[23553]: E1202 21:29:55.349543   23553 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:29:55 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:29:55 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:29:56 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 499.
	Dec 02 21:29:56 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:29:56 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:29:56 functional-753958 kubelet[23583]: E1202 21:29:56.097761   23583 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:29:56 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:29:56 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:29:56 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 500.
	Dec 02 21:29:56 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:29:56 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:29:56 functional-753958 kubelet[23630]: E1202 21:29:56.835815   23630 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:29:56 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:29:56 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:29:57 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 501.
	Dec 02 21:29:57 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:29:57 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:29:57 functional-753958 kubelet[23692]: E1202 21:29:57.598787   23692 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:29:57 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:29:57 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
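The kubelet journal above pins down the root cause: kubelet v1.35.0-beta.0 exits on startup because the host is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so kubeadm's wait-control-plane phase times out and every later probe of the apiserver on port 8441 is refused. A minimal triage sketch under those assumptions follows; the profile name is taken from this report, and the lowerCamelCase `failCgroupV1` spelling is my reading of the `FailCgroupV1` option named in the preflight warning:

	# Check which cgroup hierarchy the node is on (cgroup2fs => v2, tmpfs => v1).
	minikube -p functional-753958 ssh -- stat -fc %T /sys/fs/cgroup

	# The same troubleshooting step kubeadm suggests above.
	minikube -p functional-753958 ssh -- sudo journalctl -xeu kubelet | tail -n 50

	# Per the preflight warning, kubelet v1.35+ on a cgroup v1 host needs this
	# KubeletConfiguration field set explicitly (illustrative fragment only):
	#   apiVersion: kubelet.config.k8s.io/v1beta1
	#   kind: KubeletConfiguration
	#   failCgroupV1: false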
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958: exit status 2 (384.061714ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753958" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.06s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-753958 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-753958 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (55.740993ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-753958 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-753958 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-753958 describe po hello-node-connect: exit status 1 (59.946385ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-753958 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-753958 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-753958 logs -l app=hello-node-connect: exit status 1 (59.717076ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-753958 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-753958 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-753958 describe svc hello-node-connect: exit status 1 (64.179708ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-753958 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
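Every kubectl invocation in this post-mortem fails with the same connection refused against 192.168.49.2:8441, so the missing hello-node-connect deployment is a symptom rather than the cause: the control plane never came up (see the kubelet crash loop above). A quick confirmation sketch, reusing the profile and context names from this report:

	# Expect apiserver: Stopped, matching the --format={{.APIServer}} output shown earlier.
	minikube -p functional-753958 status

	# Expect the same "connection ... refused" as the describe/logs commands above.
	kubectl --context functional-753958 cluster-info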
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753958
helpers_test.go:243: (dbg) docker inspect functional-753958:

-- stdout --
	[
	    {
	        "Id": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	        "Created": "2025-12-02T21:00:39.470229988Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301734,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:00:39.535019201Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hostname",
	        "HostsPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hosts",
	        "LogPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a-json.log",
	        "Name": "/functional-753958",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753958:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-753958",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	                "LowerDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753958",
	                "Source": "/var/lib/docker/volumes/functional-753958/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753958",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753958",
	                "name.minikube.sigs.k8s.io": "functional-753958",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44df82336b1507d3d877e818baebb098332071ab7b3e3f7343e15c1fe55b3ab1",
	            "SandboxKey": "/var/run/docker/netns/44df82336b15",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753958": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:7f:7f:d7:c5:84",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e90d0c1216d32743827f22180e4e07c31360f0f3cc3431312aff46869716bb9",
	                    "EndpointID": "5ead8efafa1df1b03c8f1f51c032157081a17706bc48186adc0670bc42c0b521",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753958",
	                        "321ef4a88b51"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
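The inspect output above shows the node container itself is fine ("Status": "running") and that the apiserver port 8441/tcp is published to 127.0.0.1:33111 on the host, so the refusals come from nothing listening inside the container, not from Docker networking. A probe sketch using the host port from this particular inspect (it is assigned per run) and assuming the `ss` utility exists in the node image:

	# From the host: a live apiserver would answer with an HTTP status, not a refusal.
	curl -sk https://127.0.0.1:33111/healthz

	# Inside the node: confirm nothing is bound to 8441.
	minikube -p functional-753958 ssh -- sudo ss -tlnp | grep 8441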
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958: exit status 2 (303.978518ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-753958 cache reload                                                                                                                               │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ ssh     │ functional-753958 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │ 02 Dec 25 21:15 UTC │
	│ kubectl │ functional-753958 kubectl -- --context functional-753958 get pods                                                                                            │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │                     │
	│ start   │ -p functional-753958 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:15 UTC │                     │
	│ cp      │ functional-753958 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:27 UTC │ 02 Dec 25 21:27 UTC │
	│ config  │ functional-753958 config unset cpus                                                                                                                          │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:27 UTC │ 02 Dec 25 21:27 UTC │
	│ config  │ functional-753958 config get cpus                                                                                                                            │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:27 UTC │                     │
	│ config  │ functional-753958 config set cpus 2                                                                                                                          │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:27 UTC │ 02 Dec 25 21:27 UTC │
	│ config  │ functional-753958 config get cpus                                                                                                                            │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:27 UTC │ 02 Dec 25 21:27 UTC │
	│ config  │ functional-753958 config unset cpus                                                                                                                          │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:27 UTC │ 02 Dec 25 21:27 UTC │
	│ ssh     │ functional-753958 ssh -n functional-753958 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:27 UTC │ 02 Dec 25 21:27 UTC │
	│ config  │ functional-753958 config get cpus                                                                                                                            │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:27 UTC │                     │
	│ ssh     │ functional-753958 ssh echo hello                                                                                                                             │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:27 UTC │ 02 Dec 25 21:27 UTC │
	│ cp      │ functional-753958 cp functional-753958:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2827847343/001/cp-test.txt │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:27 UTC │ 02 Dec 25 21:27 UTC │
	│ ssh     │ functional-753958 ssh cat /etc/hostname                                                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:27 UTC │ 02 Dec 25 21:27 UTC │
	│ ssh     │ functional-753958 ssh -n functional-753958 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:27 UTC │ 02 Dec 25 21:27 UTC │
	│ tunnel  │ functional-753958 tunnel --alsologtostderr                                                                                                                   │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:27 UTC │                     │
	│ tunnel  │ functional-753958 tunnel --alsologtostderr                                                                                                                   │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:27 UTC │                     │
	│ cp      │ functional-753958 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:27 UTC │ 02 Dec 25 21:27 UTC │
	│ tunnel  │ functional-753958 tunnel --alsologtostderr                                                                                                                   │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:27 UTC │                     │
	│ ssh     │ functional-753958 ssh -n functional-753958 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:27 UTC │ 02 Dec 25 21:27 UTC │
	│ addons  │ functional-753958 addons list                                                                                                                                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ addons  │ functional-753958 addons list -o json                                                                                                                        │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:15:27
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:15:27.807151  313474 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:15:27.807260  313474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:15:27.807264  313474 out.go:374] Setting ErrFile to fd 2...
	I1202 21:15:27.807268  313474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:15:27.807610  313474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:15:27.808015  313474 out.go:368] Setting JSON to false
	I1202 21:15:27.809366  313474 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10666,"bootTime":1764699462,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 21:15:27.809431  313474 start.go:143] virtualization:  
	I1202 21:15:27.812823  313474 out.go:179] * [functional-753958] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:15:27.815796  313474 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:15:27.816009  313474 notify.go:221] Checking for updates...
	I1202 21:15:27.821378  313474 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:15:27.824158  313474 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:15:27.826979  313474 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 21:15:27.829780  313474 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:15:27.832616  313474 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:15:27.835951  313474 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:15:27.836043  313474 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:15:27.868236  313474 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:15:27.868329  313474 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:15:27.931411  313474 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 21:15:27.921542243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:15:27.931507  313474 docker.go:319] overlay module found
	I1202 21:15:27.934670  313474 out.go:179] * Using the docker driver based on existing profile
	I1202 21:15:27.937620  313474 start.go:309] selected driver: docker
	I1202 21:15:27.937631  313474 start.go:927] validating driver "docker" against &{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:15:27.937764  313474 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:15:27.937862  313474 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:15:27.995269  313474 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-02 21:15:27.986382161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:15:27.995660  313474 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 21:15:27.995688  313474 cni.go:84] Creating CNI manager for ""
	I1202 21:15:27.995745  313474 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 21:15:27.995788  313474 start.go:353] cluster config:
	{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:15:27.998840  313474 out.go:179] * Starting "functional-753958" primary control-plane node in "functional-753958" cluster
	I1202 21:15:28.001915  313474 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 21:15:28.005631  313474 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 21:15:28.008845  313474 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 21:15:28.008946  313474 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 21:15:28.029517  313474 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 21:15:28.029530  313474 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 21:15:28.078709  313474 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1202 21:15:28.277463  313474 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1202 21:15:28.277635  313474 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/config.json ...
	I1202 21:15:28.277718  313474 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.277817  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 21:15:28.277826  313474 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 130.54µs
	I1202 21:15:28.277840  313474 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 21:15:28.277851  313474 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.277891  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 21:15:28.277896  313474 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 46.374µs
	I1202 21:15:28.277901  313474 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 21:15:28.277913  313474 cache.go:243] Successfully downloaded all kic artifacts
	I1202 21:15:28.277910  313474 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.277949  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 21:15:28.277954  313474 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 44.659µs
	I1202 21:15:28.277951  313474 start.go:360] acquireMachinesLock for functional-753958: {Name:mk3203202a2efc5b27c2a0a16d932dc1b1f07522 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.277959  313474 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 21:15:28.277969  313474 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.277991  313474 start.go:364] duration metric: took 28.011µs to acquireMachinesLock for "functional-753958"
	I1202 21:15:28.277998  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 21:15:28.278004  313474 start.go:96] Skipping create...Using existing machine configuration
	I1202 21:15:28.278003  313474 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 34.797µs
	I1202 21:15:28.278008  313474 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 21:15:28.278008  313474 fix.go:54] fixHost starting: 
	I1202 21:15:28.278015  313474 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.278051  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 21:15:28.278067  313474 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 40.63µs
	I1202 21:15:28.278075  313474 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 21:15:28.278084  313474 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.278133  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 21:15:28.278144  313474 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 58.148µs
	I1202 21:15:28.278154  313474 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 21:15:28.278163  313474 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.278201  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 21:15:28.278206  313474 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 44.323µs
	I1202 21:15:28.278211  313474 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 21:15:28.278227  313474 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 21:15:28.278272  313474 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
	I1202 21:15:28.278274  313474 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 21:15:28.278279  313474 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 53.693µs
	I1202 21:15:28.278284  313474 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 21:15:28.278293  313474 cache.go:87] Successfully saved all images to host disk.
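Note the fallback at work in this stretch: the preload tarball for v1.35.0-beta.0 does not exist yet (the two 404s above), so minikube satisfies ShouldLoadCachedImages from per-image tarballs already sitting in the local cache, which is why every "cache image" check resolves in microseconds. A quick way to inspect that per-image cache by hand (a sketch, not part of the run; the path comes from the log):

    # List the individually cached images that stand in for the missing preload:
    ls /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/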
	I1202 21:15:28.303149  313474 fix.go:112] recreateIfNeeded on functional-753958: state=Running err=<nil>
	W1202 21:15:28.303168  313474 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 21:15:28.306592  313474 out.go:252] * Updating the running docker "functional-753958" container ...
	I1202 21:15:28.306620  313474 machine.go:94] provisionDockerMachine start ...
	I1202 21:15:28.306711  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:28.331641  313474 main.go:143] libmachine: Using SSH client type: native
	I1202 21:15:28.331992  313474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:15:28.331999  313474 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 21:15:28.485262  313474 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-753958
	
	I1202 21:15:28.485277  313474 ubuntu.go:182] provisioning hostname "functional-753958"
	I1202 21:15:28.485346  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:28.502136  313474 main.go:143] libmachine: Using SSH client type: native
	I1202 21:15:28.502454  313474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:15:28.502463  313474 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-753958 && echo "functional-753958" | sudo tee /etc/hostname
	I1202 21:15:28.662872  313474 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-753958
	
	I1202 21:15:28.662941  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:28.680996  313474 main.go:143] libmachine: Using SSH client type: native
	I1202 21:15:28.681283  313474 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1202 21:15:28.681296  313474 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753958/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 21:15:28.829833  313474 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 21:15:28.829849  313474 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 21:15:28.829870  313474 ubuntu.go:190] setting up certificates
	I1202 21:15:28.829878  313474 provision.go:84] configureAuth start
	I1202 21:15:28.829936  313474 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
	I1202 21:15:28.847119  313474 provision.go:143] copyHostCerts
	I1202 21:15:28.847182  313474 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 21:15:28.847194  313474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 21:15:28.847267  313474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 21:15:28.847367  313474 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 21:15:28.847372  313474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 21:15:28.847403  313474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 21:15:28.847459  313474 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 21:15:28.847462  313474 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 21:15:28.847485  313474 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 21:15:28.847574  313474 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.functional-753958 san=[127.0.0.1 192.168.49.2 functional-753958 localhost minikube]
	I1202 21:15:28.960674  313474 provision.go:177] copyRemoteCerts
	I1202 21:15:28.960733  313474 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 21:15:28.960772  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:28.978043  313474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:15:29.081719  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 21:15:29.105765  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 21:15:29.122371  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 21:15:29.139343  313474 provision.go:87] duration metric: took 309.452187ms to configureAuth
	I1202 21:15:29.139359  313474 ubuntu.go:206] setting minikube options for container-runtime
	I1202 21:15:29.139545  313474 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:15:29.139550  313474 machine.go:97] duration metric: took 832.92543ms to provisionDockerMachine
	I1202 21:15:29.139557  313474 start.go:293] postStartSetup for "functional-753958" (driver="docker")
	I1202 21:15:29.139567  313474 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 21:15:29.139623  313474 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 21:15:29.139660  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:29.156608  313474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:15:29.261796  313474 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 21:15:29.265154  313474 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 21:15:29.265170  313474 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 21:15:29.265181  313474 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 21:15:29.265234  313474 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 21:15:29.265309  313474 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 21:15:29.265381  313474 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts -> hosts in /etc/test/nested/copy/263241
	I1202 21:15:29.265422  313474 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/263241
	I1202 21:15:29.272853  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 21:15:29.290463  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts --> /etc/test/nested/copy/263241/hosts (40 bytes)
	I1202 21:15:29.307373  313474 start.go:296] duration metric: took 167.802474ms for postStartSetup
	I1202 21:15:29.307459  313474 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:15:29.307497  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:29.324791  313474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:15:29.426726  313474 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 21:15:29.431481  313474 fix.go:56] duration metric: took 1.153466989s for fixHost
	I1202 21:15:29.431495  313474 start.go:83] releasing machines lock for "functional-753958", held for 1.153497537s
	I1202 21:15:29.431566  313474 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
	I1202 21:15:29.447801  313474 ssh_runner.go:195] Run: cat /version.json
	I1202 21:15:29.447846  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:29.447885  313474 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 21:15:29.447935  313474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
	I1202 21:15:29.467421  313474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:15:29.471596  313474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
	I1202 21:15:29.659911  313474 ssh_runner.go:195] Run: systemctl --version
	I1202 21:15:29.666244  313474 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 21:15:29.670444  313474 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 21:15:29.670514  313474 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 21:15:29.678098  313474 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
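The `find` at 21:15:29.670514 is printed by the runner with its shell quoting stripped, which makes it look malformed. A readable, runnable form with the same semantics (a sketch: rename any bridge or podman CNI configs out of the way so that only the config minikube installs stays active; in this run none were found):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;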
	I1202 21:15:29.678112  313474 start.go:496] detecting cgroup driver to use...
	I1202 21:15:29.678141  313474 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 21:15:29.678186  313474 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 21:15:29.694041  313474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 21:15:29.710665  313474 docker.go:218] disabling cri-docker service (if available) ...
	I1202 21:15:29.710716  313474 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 21:15:29.728421  313474 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 21:15:29.743568  313474 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 21:15:29.860902  313474 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 21:15:29.989688  313474 docker.go:234] disabling docker service ...
	I1202 21:15:29.989770  313474 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 21:15:30.008558  313474 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 21:15:30.033480  313474 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 21:15:30.168415  313474 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 21:15:30.289508  313474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 21:15:30.302465  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 21:15:30.316926  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 21:15:30.325512  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 21:15:30.334372  313474 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 21:15:30.334439  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 21:15:30.343106  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 21:15:30.351679  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 21:15:30.359860  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 21:15:30.368460  313474 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 21:15:30.376324  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 21:15:30.384579  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 21:15:30.393108  313474 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1202 21:15:30.401480  313474 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 21:15:30.408867  313474 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 21:15:30.415924  313474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:15:30.533792  313474 ssh_runner.go:195] Run: sudo systemctl restart containerd
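The run of in-place `sed` edits above rewrites /etc/containerd/config.toml before the restart: SystemdCgroup is forced to false (matching the cgroupfs driver detected on the host), the sandbox image is pinned to registry.k8s.io/pause:3.10.1, the legacy io.containerd.runc.v1 and io.containerd.runtime.v1.linux runtime names are mapped to io.containerd.runc.v2, and enable_unprivileged_ports = true is re-inserted under the CRI plugin table. A minimal spot-check of the result (a sketch):

    sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
    sudo grep -n 'sandbox_image' /etc/containerd/config.toml   # expect: "registry.k8s.io/pause:3.10.1"
    sudo systemctl is-active containerd                        # expect: active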
	I1202 21:15:30.657833  313474 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 21:15:30.657894  313474 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 21:15:30.661737  313474 start.go:564] Will wait 60s for crictl version
	I1202 21:15:30.661805  313474 ssh_runner.go:195] Run: which crictl
	I1202 21:15:30.665271  313474 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 21:15:30.691831  313474 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 21:15:30.691893  313474 ssh_runner.go:195] Run: containerd --version
	I1202 21:15:30.710586  313474 ssh_runner.go:195] Run: containerd --version
	I1202 21:15:30.734130  313474 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 21:15:30.737177  313474 cli_runner.go:164] Run: docker network inspect functional-753958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 21:15:30.753095  313474 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 21:15:30.760367  313474 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1202 21:15:30.763216  313474 kubeadm.go:884] updating cluster {Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
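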
	I1202 21:15:30.763354  313474 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 21:15:30.763426  313474 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 21:15:30.788120  313474 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 21:15:30.788132  313474 cache_images.go:86] Images are preloaded, skipping loading
	I1202 21:15:30.788138  313474 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
	I1202 21:15:30.788245  313474 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-753958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
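The empty `ExecStart=` in the drop-in above is the standard systemd idiom for overriding a command: a drop-in cannot replace a list-valued setting directly, so the first line clears the ExecStart inherited from kubelet.service and the second installs minikube's own command line. To confirm what systemd will actually run (a sketch, not part of the run):

    systemctl cat kubelet                    # base unit plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart      # the effective, merged command line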
	I1202 21:15:30.788311  313474 ssh_runner.go:195] Run: sudo crictl info
	I1202 21:15:30.816149  313474 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1202 21:15:30.816166  313474 cni.go:84] Creating CNI manager for ""
	I1202 21:15:30.816175  313474 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 21:15:30.816190  313474 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 21:15:30.816220  313474 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753958 NodeName:functional-753958 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 21:15:30.816350  313474 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-753958"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 21:15:30.816417  313474 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 21:15:30.824592  313474 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 21:15:30.824650  313474 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 21:15:30.832172  313474 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 21:15:30.844549  313474 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 21:15:30.856965  313474 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2087 bytes)
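The generated kubeadm config printed above is what was just copied to /var/tmp/minikube/kubeadm.yaml.new. Recent kubeadm releases can sanity-check such a file offline before it is ever applied; a sketch using the binaries path from this run (the `kubeadm config validate` subcommand is assumed to be present in the v1.35.0-beta.0 binary):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new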
	I1202 21:15:30.869111  313474 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 21:15:30.872973  313474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 21:15:30.993888  313474 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 21:15:31.292555  313474 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958 for IP: 192.168.49.2
	I1202 21:15:31.292567  313474 certs.go:195] generating shared ca certs ...
	I1202 21:15:31.292581  313474 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 21:15:31.292714  313474 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 21:15:31.292766  313474 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 21:15:31.292772  313474 certs.go:257] generating profile certs ...
	I1202 21:15:31.292864  313474 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key
	I1202 21:15:31.292921  313474 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key.c4f6fd35
	I1202 21:15:31.292963  313474 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key
	I1202 21:15:31.293076  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 21:15:31.293105  313474 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 21:15:31.293112  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 21:15:31.293138  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 21:15:31.293160  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 21:15:31.293184  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 21:15:31.293230  313474 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 21:15:31.293875  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 21:15:31.313092  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 21:15:31.332062  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 21:15:31.351302  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 21:15:31.370658  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 21:15:31.387720  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 21:15:31.405248  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 21:15:31.422664  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 21:15:31.440135  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 21:15:31.457687  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 21:15:31.475495  313474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 21:15:31.492183  313474 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 21:15:31.504166  313474 ssh_runner.go:195] Run: openssl version
	I1202 21:15:31.510525  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 21:15:31.518840  313474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 21:15:31.522541  313474 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 21:15:31.522596  313474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 21:15:31.563265  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 21:15:31.571112  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 21:15:31.579437  313474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:15:31.583195  313474 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:15:31.583250  313474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 21:15:31.628890  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 21:15:31.636777  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 21:15:31.644711  313474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 21:15:31.648206  313474 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 21:15:31.648271  313474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 21:15:31.689010  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
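The `ln -fs ... /etc/ssl/certs/<hash>.0` targets above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention: TLS libraries look up a CA by opening <subject-hash>.0 in the certs directory, and the hash is exactly what `openssl x509 -hash` prints (b5213941 for minikubeCA.pem in this run). Rebuilding one of the links by hand (a sketch):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 per the log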
	I1202 21:15:31.696812  313474 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 21:15:31.700482  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 21:15:31.740999  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 21:15:31.782731  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 21:15:31.823250  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 21:15:31.865611  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 21:15:31.906492  313474 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
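Each of the six `openssl x509 -checkend 86400` calls above asks whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will, non-zero means it expires (or has already expired) inside that window, which would trigger regeneration. The same check in isolation (a sketch):

    if openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
        echo "certificate valid for at least another 24h"
    else
        echo "certificate expires within 24h; would be regenerated"
    fi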
	I1202 21:15:31.947359  313474 kubeadm.go:401] StartCluster: {Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:15:31.947441  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 21:15:31.947511  313474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:15:31.973182  313474 cri.go:89] found id: ""
	I1202 21:15:31.973243  313474 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 21:15:31.980768  313474 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 21:15:31.980777  313474 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 21:15:31.980838  313474 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 21:15:31.988019  313474 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:15:31.988518  313474 kubeconfig.go:125] found "functional-753958" server: "https://192.168.49.2:8441"
	I1202 21:15:31.989827  313474 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 21:15:31.997696  313474 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-02 21:00:56.754776837 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-02 21:15:30.864977782 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
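
The reconfigure decision above rests on a plain diff exit code: the ExtraOptions entry in the cluster config (enable-admission-plugins=NamespaceAutoProvision) means the freshly rendered kubeadm.yaml.new no longer matches the file written at 21:00:56, so minikube replays the init phases instead of reusing the running config. A minimal sketch of that check, assuming it runs where the files live (the real code routes the command through ssh_runner):

package restart

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrifted reports whether the rendered kubeadm config differs from the
// one on disk. diff exits 0 when the files match, 1 when they differ, and
// greater than 1 on error, which is the distinction used below.
func configDrifted(current, rendered string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", current, rendered).CombinedOutput()
	if err == nil {
		return false, "", nil // identical: no reconfigure needed
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, string(out), nil // drift: out holds the unified diff logged above
	}
	return false, "", fmt.Errorf("diff failed: %w", err)
}
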
	I1202 21:15:31.997711  313474 kubeadm.go:1161] stopping kube-system containers ...
	I1202 21:15:31.997724  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1202 21:15:31.997791  313474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 21:15:32.028400  313474 cri.go:89] found id: ""
	I1202 21:15:32.028460  313474 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 21:15:32.046252  313474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:15:32.054174  313474 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  2 21:05 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  2 21:05 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  2 21:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  2 21:05 /etc/kubernetes/scheduler.conf
	
	I1202 21:15:32.054235  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:15:32.061845  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:15:32.069217  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:15:32.069283  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:15:32.076901  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:15:32.084278  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:15:32.084333  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:15:32.091360  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:15:32.098582  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 21:15:32.098635  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
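
Each kubeconfig is then probed with grep for the expected control-plane endpoint; admin.conf matched and was kept, while the other three exited with status 1 (they no longer reference the endpoint) and were deleted so the kubeconfig phase below can regenerate them. A sketch of that grep-then-remove loop, under the same on-node assumption:

package restart

import (
	"fmt"
	"os/exec"
)

// removeStaleKubeconfigs deletes any kubeconfig that no longer references
// endpoint, mirroring the grep/rm pairs in the log above. grep exits 1 when
// the string is absent (status 2, an unreadable file, is treated the same here).
func removeStaleKubeconfigs(endpoint string) error {
	files := []string{
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if exec.Command("sudo", "grep", endpoint, f).Run() == nil {
			continue // endpoint present, keep the file
		}
		if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
			return fmt.Errorf("removing stale %s: %w", f, err)
		}
	}
	return nil
}

Calling removeStaleKubeconfigs("https://control-plane.minikube.internal:8441") would reproduce the sequence logged at 21:15:32.
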
	I1202 21:15:32.105786  313474 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 21:15:32.113101  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:15:32.157271  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:15:33.778908  313474 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.621612732s)
	I1202 21:15:33.778983  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:15:33.980110  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 21:15:34.046494  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
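
With the stale files gone, the restart replays individual kubeadm init phases rather than a full init: certs, kubeconfig, kubelet-start, control-plane, and etcd, in that order, each against the freshly copied /var/tmp/minikube/kubeadm.yaml and with PATH pinned to the v1.35.0-beta.0 binaries. A sketch of that sequence:

package restart

import (
	"fmt"
	"os/exec"
)

// replayInitPhases runs the kubeadm phases used for a control-plane restart,
// in the order they appear in the log above (sketch; real code uses ssh_runner).
func replayInitPhases() error {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(`env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
		if out, err := exec.Command("sudo", "/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm init phase %s: %v\n%s", phase, err, out)
		}
	}
	return nil
}
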
	I1202 21:15:34.096642  313474 api_server.go:52] waiting for apiserver process to appear ...
	I1202 21:15:34.096721  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:15:34.596907  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... same `sudo pgrep -xnf kube-apiserver.*minikube.*` probe repeated every ~0.5s from 21:15:35.097 through 21:16:33.097, with no matching process found ...]
	I1202 21:16:33.596893  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
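
The half-second cadence above is minikube waiting for the apiserver process to appear: pgrep -xnf exits non-zero until some process's full command line matches the pattern, and the loop retries until a timeout. A minimal sketch, with a hypothetical runOnNode standing in for ssh_runner:

package restart

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls the node until a kube-apiserver process
// appears or timeout elapses, at the ~0.5s cadence seen in the log.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep -xnf exits 0 only when a full command line matches the pattern.
		if runOnNode("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*") == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
}

// runOnNode is a hypothetical stand-in for minikube's ssh_runner.
func runOnNode(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}
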
	I1202 21:16:34.097219  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:34.097318  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:34.124123  313474 cri.go:89] found id: ""
	I1202 21:16:34.124137  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.124144  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:34.124150  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:34.124209  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:34.149042  313474 cri.go:89] found id: ""
	I1202 21:16:34.149056  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.149063  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:34.149069  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:34.149127  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:34.172796  313474 cri.go:89] found id: ""
	I1202 21:16:34.172810  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.172817  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:34.172823  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:34.172888  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:34.199775  313474 cri.go:89] found id: ""
	I1202 21:16:34.199789  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.199796  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:34.199801  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:34.199858  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:34.223410  313474 cri.go:89] found id: ""
	I1202 21:16:34.223424  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.223431  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:34.223436  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:34.223542  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:34.248663  313474 cri.go:89] found id: ""
	I1202 21:16:34.248677  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.248683  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:34.248689  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:34.248747  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:34.272612  313474 cri.go:89] found id: ""
	I1202 21:16:34.272626  313474 logs.go:282] 0 containers: []
	W1202 21:16:34.272633  313474 logs.go:284] No container was found matching "kindnet"
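
Each found id: "" / "0 containers" pair above comes from the same probe: crictl ps -a --quiet --name=<component> prints one container ID per line and nothing at all when no container in any state matches, which is how the absence of every control-plane container is established. A sketch, assuming crictl is reachable where the code runs:

package restart

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of containers (in any state) whose name
// matches name; an empty slice reproduces the "0 containers" lines above.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps: %w", err)
	}
	return strings.Fields(string(out)), nil
}
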
	I1202 21:16:34.272641  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:34.272650  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:34.304889  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:34.304905  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:34.363275  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:34.363294  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:34.379039  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:34.379054  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:34.446716  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:34.438632   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.439203   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.441070   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.441841   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:34.443136   11328 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	(repeats the five connection-refused errors above verbatim)
	
	** /stderr **
	I1202 21:16:34.446728  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:34.446739  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
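
The describe-nodes failure is an independent confirmation of the same state: nothing is listening on port 8441, so the TCP handshake is refused outright (a timeout instead would point at a firewall or a hung listener). A quick standard-library probe that reproduces the distinction, not part of the test run itself:

package restart

import (
	"fmt"
	"net"
	"time"
)

// probeAPIServer distinguishes "connection refused" (no listener on the port)
// from a timeout (listener unreachable or hung) for the apiserver endpoint.
func probeAPIServer(addr string) string {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return fmt.Sprintf("dial %s: %v", addr, err) // prints "connection refused" here
	}
	conn.Close()
	return fmt.Sprintf("listener present on %s", addr)
}

During this window, probeAPIServer("localhost:8441") would return the same connect: connection refused that kubectl reports above.
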
	I1202 21:16:37.010773  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:37.023010  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:37.023081  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:37.074765  313474 cri.go:89] found id: ""
	I1202 21:16:37.074779  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.074786  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:37.074791  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:37.074849  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:37.105604  313474 cri.go:89] found id: ""
	I1202 21:16:37.105617  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.105624  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:37.105630  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:37.105731  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:37.135381  313474 cri.go:89] found id: ""
	I1202 21:16:37.135395  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.135402  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:37.135407  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:37.135465  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:37.159378  313474 cri.go:89] found id: ""
	I1202 21:16:37.159391  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.159398  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:37.159404  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:37.159460  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:37.184079  313474 cri.go:89] found id: ""
	I1202 21:16:37.184093  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.184100  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:37.184105  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:37.184266  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:37.208512  313474 cri.go:89] found id: ""
	I1202 21:16:37.208526  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.208533  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:37.208539  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:37.208598  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:37.231722  313474 cri.go:89] found id: ""
	I1202 21:16:37.231735  313474 logs.go:282] 0 containers: []
	W1202 21:16:37.231742  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:37.231750  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:37.231760  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:37.247154  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:37.247171  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:37.311439  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:37.303898   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.304432   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.306024   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.306447   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:37.307866   11422 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	(repeats the five connection-refused errors above verbatim)
	
	** /stderr **
	I1202 21:16:37.311449  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:37.311459  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:37.374896  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:37.374916  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:37.402545  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:37.402561  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:39.959953  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:39.969383  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:39.969445  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:39.998437  313474 cri.go:89] found id: ""
	I1202 21:16:39.998450  313474 logs.go:282] 0 containers: []
	W1202 21:16:39.998457  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:39.998463  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:39.998519  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:40.079783  313474 cri.go:89] found id: ""
	I1202 21:16:40.079799  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.079807  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:40.079813  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:40.079882  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:40.112177  313474 cri.go:89] found id: ""
	I1202 21:16:40.112203  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.112210  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:40.112217  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:40.112289  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:40.148805  313474 cri.go:89] found id: ""
	I1202 21:16:40.148820  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.148828  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:40.148834  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:40.148918  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:40.180826  313474 cri.go:89] found id: ""
	I1202 21:16:40.180841  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.180848  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:40.180855  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:40.180930  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:40.209004  313474 cri.go:89] found id: ""
	I1202 21:16:40.209018  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.209025  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:40.209032  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:40.209091  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:40.234748  313474 cri.go:89] found id: ""
	I1202 21:16:40.234762  313474 logs.go:282] 0 containers: []
	W1202 21:16:40.234769  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:40.234778  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:40.234788  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:40.297246  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:40.289556   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.290130   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.291723   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.292196   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:40.293755   11526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	(repeats the five connection-refused errors above verbatim)
	
	** /stderr **
	I1202 21:16:40.297257  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:40.297268  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:40.359276  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:40.359297  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:40.389165  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:40.389181  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:40.447977  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:40.447997  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:42.964946  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:42.974927  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:42.974987  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:42.997720  313474 cri.go:89] found id: ""
	I1202 21:16:42.997734  313474 logs.go:282] 0 containers: []
	W1202 21:16:42.997741  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:42.997747  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:42.997808  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:43.022947  313474 cri.go:89] found id: ""
	I1202 21:16:43.022961  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.022968  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:43.022973  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:43.023034  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:43.053855  313474 cri.go:89] found id: ""
	I1202 21:16:43.053869  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.053876  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:43.053881  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:43.053941  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:43.086462  313474 cri.go:89] found id: ""
	I1202 21:16:43.086475  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.086482  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:43.086487  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:43.086545  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:43.112776  313474 cri.go:89] found id: ""
	I1202 21:16:43.112790  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.112798  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:43.112803  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:43.112861  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:43.137549  313474 cri.go:89] found id: ""
	I1202 21:16:43.137563  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.137570  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:43.137576  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:43.137695  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:43.161710  313474 cri.go:89] found id: ""
	I1202 21:16:43.161724  313474 logs.go:282] 0 containers: []
	W1202 21:16:43.161731  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:43.161739  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:43.161751  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:43.217891  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:43.217910  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:43.233516  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:43.233539  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:43.295127  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:43.287570   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.288255   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.289907   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.290345   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:43.291827   11638 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	(repeats the five connection-refused errors above verbatim)
	
	** /stderr **
	I1202 21:16:43.295145  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:43.295157  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:43.361614  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:43.361638  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:45.891122  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:45.901162  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:45.901219  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:45.924968  313474 cri.go:89] found id: ""
	I1202 21:16:45.924982  313474 logs.go:282] 0 containers: []
	W1202 21:16:45.924989  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:45.924994  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:45.925064  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:45.960327  313474 cri.go:89] found id: ""
	I1202 21:16:45.960350  313474 logs.go:282] 0 containers: []
	W1202 21:16:45.960357  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:45.960362  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:45.960428  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:45.988303  313474 cri.go:89] found id: ""
	I1202 21:16:45.988317  313474 logs.go:282] 0 containers: []
	W1202 21:16:45.988324  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:45.988330  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:45.988395  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:46.015569  313474 cri.go:89] found id: ""
	I1202 21:16:46.015582  313474 logs.go:282] 0 containers: []
	W1202 21:16:46.015590  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:46.015595  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:46.015656  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:46.042481  313474 cri.go:89] found id: ""
	I1202 21:16:46.042494  313474 logs.go:282] 0 containers: []
	W1202 21:16:46.042511  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:46.042517  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:46.042583  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:46.076870  313474 cri.go:89] found id: ""
	I1202 21:16:46.076910  313474 logs.go:282] 0 containers: []
	W1202 21:16:46.076918  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:46.076924  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:46.076995  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:46.110449  313474 cri.go:89] found id: ""
	I1202 21:16:46.110490  313474 logs.go:282] 0 containers: []
	W1202 21:16:46.110498  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:46.110514  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:46.110525  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:46.188559  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:46.179077   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.179721   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.181442   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.182155   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:46.183999   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	(repeats the five connection-refused errors above verbatim)
	
	** /stderr **
	I1202 21:16:46.188579  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:46.188590  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:46.253578  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:46.253598  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:46.281754  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:46.281771  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:46.338833  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:46.338850  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:48.855152  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:48.865294  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:48.865357  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:48.889825  313474 cri.go:89] found id: ""
	I1202 21:16:48.889839  313474 logs.go:282] 0 containers: []
	W1202 21:16:48.889846  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:48.889852  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:48.889911  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:48.913688  313474 cri.go:89] found id: ""
	I1202 21:16:48.913705  313474 logs.go:282] 0 containers: []
	W1202 21:16:48.913712  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:48.913718  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:48.913781  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:48.937742  313474 cri.go:89] found id: ""
	I1202 21:16:48.937756  313474 logs.go:282] 0 containers: []
	W1202 21:16:48.937763  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:48.937779  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:48.937837  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:48.961294  313474 cri.go:89] found id: ""
	I1202 21:16:48.961308  313474 logs.go:282] 0 containers: []
	W1202 21:16:48.961315  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:48.961320  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:48.961378  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:48.985846  313474 cri.go:89] found id: ""
	I1202 21:16:48.985860  313474 logs.go:282] 0 containers: []
	W1202 21:16:48.985866  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:48.985872  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:48.985930  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:49.014392  313474 cri.go:89] found id: ""
	I1202 21:16:49.014405  313474 logs.go:282] 0 containers: []
	W1202 21:16:49.014412  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:49.014418  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:49.014478  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:49.038987  313474 cri.go:89] found id: ""
	I1202 21:16:49.039000  313474 logs.go:282] 0 containers: []
	W1202 21:16:49.039006  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:49.039014  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:49.039024  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:49.102227  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:49.102246  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:49.120563  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:49.120579  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:49.183266  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:49.175299   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.176040   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.177692   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.178265   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.179815   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:49.175299   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.176040   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.177692   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.178265   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:49.179815   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:49.183286  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:49.183297  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:49.246439  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:49.246458  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
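
For reference, the probe cycle above (one crictl listing per control-plane component, all returning empty) can be reproduced by hand. The following Go sketch is an illustration only, not minikube's own code; it shells out to the same crictl command the log records and assumes sudo and crictl are available on the node.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same component list the log walks through on each cycle.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            // Identical command to the log lines: list containers in any
            // state whose name matches, printing only their IDs.
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("probe %q failed: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            // An empty list here corresponds to the repeated
            // `No container was found matching ...` warnings above.
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }
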
	I1202 21:16:51.775321  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:51.785184  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:51.785254  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:51.809810  313474 cri.go:89] found id: ""
	I1202 21:16:51.809824  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.809831  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:51.809837  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:51.809900  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:51.835767  313474 cri.go:89] found id: ""
	I1202 21:16:51.835795  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.835802  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:51.835808  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:51.835866  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:51.865885  313474 cri.go:89] found id: ""
	I1202 21:16:51.865900  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.865914  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:51.865920  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:51.865980  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:51.891809  313474 cri.go:89] found id: ""
	I1202 21:16:51.891823  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.891831  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:51.891837  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:51.891898  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:51.916253  313474 cri.go:89] found id: ""
	I1202 21:16:51.916267  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.916274  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:51.916280  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:51.916349  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:51.941007  313474 cri.go:89] found id: ""
	I1202 21:16:51.941021  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.941028  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:51.941034  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:51.941093  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:51.969353  313474 cri.go:89] found id: ""
	I1202 21:16:51.969368  313474 logs.go:282] 0 containers: []
	W1202 21:16:51.969375  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:51.969382  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:51.969393  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:52.025261  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:52.025287  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:52.045534  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:52.045551  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:52.124972  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:52.117298   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.117769   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.119332   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.119874   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.121486   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:52.117298   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.117769   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.119332   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.119874   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:52.121486   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:52.124982  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:52.124993  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:52.189351  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:52.189372  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
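
Every "describe nodes" attempt in these cycles fails the same way: nothing is listening on localhost:8441, so kubectl gets a TCP connection refused before TLS or auth even starts. A minimal Go sketch of the equivalent reachability check (hypothetical, not part of the test suite) looks like this:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Probe the apiserver port the kubectl errors point at.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            // Matches the failure mode in the log: no listener on the
            // port, so every API request dies at the TCP dial.
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("TCP connect to localhost:8441 succeeded")
    }
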
	I1202 21:16:54.721393  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:54.732232  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:54.732290  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:54.757594  313474 cri.go:89] found id: ""
	I1202 21:16:54.757608  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.757630  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:54.757671  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:54.757734  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:54.783381  313474 cri.go:89] found id: ""
	I1202 21:16:54.783395  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.783402  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:54.783407  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:54.783480  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:54.808177  313474 cri.go:89] found id: ""
	I1202 21:16:54.808198  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.808205  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:54.808211  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:54.808291  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:54.831293  313474 cri.go:89] found id: ""
	I1202 21:16:54.831307  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.831314  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:54.831331  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:54.831399  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:54.854343  313474 cri.go:89] found id: ""
	I1202 21:16:54.854357  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.854363  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:54.854368  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:54.854427  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:54.882636  313474 cri.go:89] found id: ""
	I1202 21:16:54.882650  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.882667  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:54.882673  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:54.882739  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:54.911098  313474 cri.go:89] found id: ""
	I1202 21:16:54.911112  313474 logs.go:282] 0 containers: []
	W1202 21:16:54.911120  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:54.911128  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:54.911138  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:54.970728  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:54.970746  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:16:54.986382  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:54.986399  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:55.069421  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:55.058528   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.059675   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.060854   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.061730   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.063013   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:55.058528   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.059675   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.060854   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.061730   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:55.063013   12053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:55.069437  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:55.069448  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:55.151228  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:55.151266  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:57.687319  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:16:57.696959  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:16:57.697017  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:16:57.720719  313474 cri.go:89] found id: ""
	I1202 21:16:57.720733  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.720740  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:16:57.720746  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:16:57.720811  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:16:57.749778  313474 cri.go:89] found id: ""
	I1202 21:16:57.749792  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.749800  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:16:57.749805  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:16:57.749863  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:16:57.772871  313474 cri.go:89] found id: ""
	I1202 21:16:57.772884  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.772891  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:16:57.772896  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:16:57.772954  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:16:57.799916  313474 cri.go:89] found id: ""
	I1202 21:16:57.799931  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.799937  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:16:57.799943  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:16:57.800000  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:16:57.827165  313474 cri.go:89] found id: ""
	I1202 21:16:57.827179  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.827186  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:16:57.827191  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:16:57.827248  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:16:57.852136  313474 cri.go:89] found id: ""
	I1202 21:16:57.852150  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.852157  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:16:57.852166  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:16:57.852222  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:16:57.876624  313474 cri.go:89] found id: ""
	I1202 21:16:57.876638  313474 logs.go:282] 0 containers: []
	W1202 21:16:57.876645  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:16:57.876654  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:16:57.876664  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:16:57.940462  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:16:57.932401   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.933065   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.934751   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.935358   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.936935   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:16:57.932401   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.933065   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.934751   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.935358   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:16:57.936935   12153 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:16:57.940473  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:16:57.940483  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:16:58.004519  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:16:58.004544  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:16:58.036463  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:16:58.036479  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:16:58.096205  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:16:58.096223  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:00.618984  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:00.629839  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:00.629906  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:00.661470  313474 cri.go:89] found id: ""
	I1202 21:17:00.661490  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.661498  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:00.661505  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:00.661578  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:00.689166  313474 cri.go:89] found id: ""
	I1202 21:17:00.689182  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.689189  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:00.689202  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:00.689273  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:00.716048  313474 cri.go:89] found id: ""
	I1202 21:17:00.716063  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.716070  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:00.716076  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:00.716143  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:00.748003  313474 cri.go:89] found id: ""
	I1202 21:17:00.748017  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.748025  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:00.748030  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:00.748093  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:00.779207  313474 cri.go:89] found id: ""
	I1202 21:17:00.779223  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.779231  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:00.779238  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:00.779312  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:00.805166  313474 cri.go:89] found id: ""
	I1202 21:17:00.805184  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.805194  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:00.805200  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:00.805273  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:00.832311  313474 cri.go:89] found id: ""
	I1202 21:17:00.832326  313474 logs.go:282] 0 containers: []
	W1202 21:17:00.832333  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:00.832342  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:00.832352  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:00.889599  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:00.889625  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:00.906214  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:00.906230  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:00.978709  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:00.969088   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.970474   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.971348   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.973000   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.973319   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:00.969088   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.970474   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.971348   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.973000   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:00.973319   12261 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:00.978720  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:00.978734  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:01.044083  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:01.044105  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:03.609427  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:03.620657  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:03.620726  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:03.651829  313474 cri.go:89] found id: ""
	I1202 21:17:03.651844  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.651851  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:03.651857  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:03.651923  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:03.678868  313474 cri.go:89] found id: ""
	I1202 21:17:03.678889  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.678896  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:03.678902  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:03.678969  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:03.708792  313474 cri.go:89] found id: ""
	I1202 21:17:03.708806  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.708814  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:03.708820  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:03.708883  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:03.738501  313474 cri.go:89] found id: ""
	I1202 21:17:03.738516  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.738524  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:03.738531  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:03.738604  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:03.770026  313474 cri.go:89] found id: ""
	I1202 21:17:03.770050  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.770057  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:03.770063  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:03.770127  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:03.804285  313474 cri.go:89] found id: ""
	I1202 21:17:03.804300  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.804308  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:03.804324  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:03.804391  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:03.831572  313474 cri.go:89] found id: ""
	I1202 21:17:03.831587  313474 logs.go:282] 0 containers: []
	W1202 21:17:03.831594  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:03.831602  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:03.831613  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:03.860060  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:03.860086  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:03.921719  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:03.921744  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:03.939033  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:03.939051  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:04.010810  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:03.998480   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:03.999337   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:04.001085   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:04.001461   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:04.006454   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:03.998480   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:03.999337   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:04.001085   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:04.001461   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:04.006454   12378 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:04.010823  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:04.010835  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:06.576791  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:06.587693  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:06.587761  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:06.614477  313474 cri.go:89] found id: ""
	I1202 21:17:06.614493  313474 logs.go:282] 0 containers: []
	W1202 21:17:06.614500  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:06.614506  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:06.614571  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:06.641625  313474 cri.go:89] found id: ""
	I1202 21:17:06.641639  313474 logs.go:282] 0 containers: []
	W1202 21:17:06.641646  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:06.641670  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:06.641735  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:06.667567  313474 cri.go:89] found id: ""
	I1202 21:17:06.667581  313474 logs.go:282] 0 containers: []
	W1202 21:17:06.667588  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:06.667594  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:06.667657  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:06.694684  313474 cri.go:89] found id: ""
	I1202 21:17:06.694699  313474 logs.go:282] 0 containers: []
	W1202 21:17:06.694706  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:06.694711  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:06.694777  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:06.723071  313474 cri.go:89] found id: ""
	I1202 21:17:06.723090  313474 logs.go:282] 0 containers: []
	W1202 21:17:06.723097  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:06.723103  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:06.723185  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:06.751448  313474 cri.go:89] found id: ""
	I1202 21:17:06.751462  313474 logs.go:282] 0 containers: []
	W1202 21:17:06.751469  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:06.751476  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:06.751544  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:06.781674  313474 cri.go:89] found id: ""
	I1202 21:17:06.781689  313474 logs.go:282] 0 containers: []
	W1202 21:17:06.781697  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:06.781705  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:06.781723  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:06.812650  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:06.812669  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:06.874390  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:06.874410  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:06.891708  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:06.891726  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:06.960203  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:06.952388   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:06.952955   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:06.954509   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:06.954979   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:06.956555   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:06.952388   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:06.952955   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:06.954509   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:06.954979   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:06.956555   12479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:06.960213  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:06.960225  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:09.527222  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:09.537303  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:09.537380  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:09.562091  313474 cri.go:89] found id: ""
	I1202 21:17:09.562112  313474 logs.go:282] 0 containers: []
	W1202 21:17:09.562120  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:09.562125  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:09.562188  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:09.587772  313474 cri.go:89] found id: ""
	I1202 21:17:09.587786  313474 logs.go:282] 0 containers: []
	W1202 21:17:09.587802  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:09.587808  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:09.587876  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:09.613205  313474 cri.go:89] found id: ""
	I1202 21:17:09.613224  313474 logs.go:282] 0 containers: []
	W1202 21:17:09.613232  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:09.613238  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:09.613298  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:09.639556  313474 cri.go:89] found id: ""
	I1202 21:17:09.639570  313474 logs.go:282] 0 containers: []
	W1202 21:17:09.639577  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:09.639583  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:09.639648  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:09.668717  313474 cri.go:89] found id: ""
	I1202 21:17:09.668731  313474 logs.go:282] 0 containers: []
	W1202 21:17:09.668737  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:09.668743  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:09.668800  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:09.692671  313474 cri.go:89] found id: ""
	I1202 21:17:09.692685  313474 logs.go:282] 0 containers: []
	W1202 21:17:09.692693  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:09.692698  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:09.692756  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:09.717454  313474 cri.go:89] found id: ""
	I1202 21:17:09.717468  313474 logs.go:282] 0 containers: []
	W1202 21:17:09.717475  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:09.717484  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:09.717494  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:09.747114  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:09.747130  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:09.803274  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:09.803294  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:09.819246  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:09.819264  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:09.879465  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:09.872021   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:09.872397   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:09.874008   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:09.874550   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:09.876009   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:09.872021   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:09.872397   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:09.874008   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:09.874550   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:09.876009   12583 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:17:09.879474  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:09.879485  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:12.443298  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:12.453026  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:12.453087  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:12.479471  313474 cri.go:89] found id: ""
	I1202 21:17:12.479485  313474 logs.go:282] 0 containers: []
	W1202 21:17:12.479492  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:12.479498  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:12.479559  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:12.503554  313474 cri.go:89] found id: ""
	I1202 21:17:12.503567  313474 logs.go:282] 0 containers: []
	W1202 21:17:12.503575  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:12.503580  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:12.503637  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:12.528839  313474 cri.go:89] found id: ""
	I1202 21:17:12.528854  313474 logs.go:282] 0 containers: []
	W1202 21:17:12.528861  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:12.528866  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:12.528943  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:12.553622  313474 cri.go:89] found id: ""
	I1202 21:17:12.553644  313474 logs.go:282] 0 containers: []
	W1202 21:17:12.553663  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:12.553669  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:12.553737  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:12.579503  313474 cri.go:89] found id: ""
	I1202 21:17:12.579516  313474 logs.go:282] 0 containers: []
	W1202 21:17:12.579523  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:12.579528  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:12.579583  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:12.612312  313474 cri.go:89] found id: ""
	I1202 21:17:12.612327  313474 logs.go:282] 0 containers: []
	W1202 21:17:12.612334  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:12.612339  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:12.612413  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:12.636613  313474 cri.go:89] found id: ""
	I1202 21:17:12.636628  313474 logs.go:282] 0 containers: []
	W1202 21:17:12.636635  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:12.636642  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:12.636652  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:12.696881  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:12.689031   12670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:12.689585   12670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:12.691223   12670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:12.691645   12670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:12.693045   12670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:12.696892  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:12.696903  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:12.758877  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:12.758898  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:12.786233  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:12.786249  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:12.841290  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:12.841308  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
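The cycle above repeats on a short backoff for the rest of this start attempt: minikube polls for a kube-apiserver process, asks containerd (via crictl) for each control-plane container by name, finds none, and then gathers the kubelet, dmesg, describe-nodes, containerd, and container-status logs. The same probes can be run by hand over "minikube ssh" while the test is stuck; a minimal sketch, reusing the exact commands from the log and the profile name from this report:

    # Is a kube-apiserver process running inside the node?
    minikube -p functional-753958 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # Does containerd know about any control-plane containers?
    minikube -p functional-753958 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
    minikube -p functional-753958 ssh -- sudo crictl ps -a --quiet --name=etcd

    # The same log sources the retry loop collects:
    minikube -p functional-753958 ssh -- sudo journalctl -u kubelet -n 400
    minikube -p functional-753958 ssh -- sudo journalctl -u containerd -n 400

An empty result from each crictl query matches the found id: "" lines above: the control-plane containers were never created, so the failure is upstream of the apiserver (typically kubelet or containerd, hence those two journals).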
	I1202 21:17:15.357945  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:15.367765  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:15.367824  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:15.395603  313474 cri.go:89] found id: ""
	I1202 21:17:15.395617  313474 logs.go:282] 0 containers: []
	W1202 21:17:15.395624  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:15.395629  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:15.395688  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:15.418671  313474 cri.go:89] found id: ""
	I1202 21:17:15.418684  313474 logs.go:282] 0 containers: []
	W1202 21:17:15.418691  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:15.418705  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:15.418763  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:15.442594  313474 cri.go:89] found id: ""
	I1202 21:17:15.442607  313474 logs.go:282] 0 containers: []
	W1202 21:17:15.442615  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:15.442624  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:15.442680  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:15.466331  313474 cri.go:89] found id: ""
	I1202 21:17:15.466345  313474 logs.go:282] 0 containers: []
	W1202 21:17:15.466352  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:15.466357  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:15.466416  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:15.491762  313474 cri.go:89] found id: ""
	I1202 21:17:15.491775  313474 logs.go:282] 0 containers: []
	W1202 21:17:15.491782  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:15.491788  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:15.491847  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:15.517473  313474 cri.go:89] found id: ""
	I1202 21:17:15.517487  313474 logs.go:282] 0 containers: []
	W1202 21:17:15.517503  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:15.517509  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:15.517577  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:15.544100  313474 cri.go:89] found id: ""
	I1202 21:17:15.544122  313474 logs.go:282] 0 containers: []
	W1202 21:17:15.544129  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:15.544138  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:15.544148  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:15.570436  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:15.570453  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:15.625879  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:15.625897  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:15.641070  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:15.641091  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:15.704897  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:15.696064   12793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:15.696969   12793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:15.698638   12793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:15.698933   12793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:15.701123   12793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:15.704906  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:15.704916  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
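Every describe-nodes attempt fails the same way: kubectl dials the apiserver on localhost:8441 and gets connection refused on [::1]:8441, which is consistent with the empty crictl listings above — nothing is listening because no kube-apiserver container exists. A quick manual confirmation (the kubectl invocation is copied verbatim from the log; the ss check assumes that utility is present in the node image):

    # Expect "connection refused" until an apiserver container is running:
    minikube -p functional-753958 ssh -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

    # Nothing should be listening on the apiserver port yet:
    minikube -p functional-753958 ssh -- sudo ss -tlnp | grep 8441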
	I1202 21:17:18.272120  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:18.282411  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:18.282474  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:18.310144  313474 cri.go:89] found id: ""
	I1202 21:17:18.310158  313474 logs.go:282] 0 containers: []
	W1202 21:17:18.310165  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:18.310170  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:18.310230  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:18.339624  313474 cri.go:89] found id: ""
	I1202 21:17:18.339637  313474 logs.go:282] 0 containers: []
	W1202 21:17:18.339645  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:18.339650  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:18.339709  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:18.367236  313474 cri.go:89] found id: ""
	I1202 21:17:18.367252  313474 logs.go:282] 0 containers: []
	W1202 21:17:18.367259  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:18.367265  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:18.367323  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:18.391197  313474 cri.go:89] found id: ""
	I1202 21:17:18.391213  313474 logs.go:282] 0 containers: []
	W1202 21:17:18.391220  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:18.391226  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:18.391285  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:18.419753  313474 cri.go:89] found id: ""
	I1202 21:17:18.419768  313474 logs.go:282] 0 containers: []
	W1202 21:17:18.419775  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:18.419780  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:18.419841  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:18.445569  313474 cri.go:89] found id: ""
	I1202 21:17:18.445588  313474 logs.go:282] 0 containers: []
	W1202 21:17:18.445596  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:18.445601  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:18.445689  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:18.471844  313474 cri.go:89] found id: ""
	I1202 21:17:18.471858  313474 logs.go:282] 0 containers: []
	W1202 21:17:18.471865  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:18.471882  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:18.471893  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:18.500607  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:18.500623  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:18.556521  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:18.556540  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:18.572100  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:18.572115  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:18.637389  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:18.628942   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:18.629878   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:18.631639   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:18.632186   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:18.633838   12902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:18.637399  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:18.637419  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:21.200861  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:21.210744  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:21.210815  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:21.235330  313474 cri.go:89] found id: ""
	I1202 21:17:21.235344  313474 logs.go:282] 0 containers: []
	W1202 21:17:21.235351  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:21.235356  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:21.235412  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:21.263273  313474 cri.go:89] found id: ""
	I1202 21:17:21.263287  313474 logs.go:282] 0 containers: []
	W1202 21:17:21.263294  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:21.263299  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:21.263358  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:21.295429  313474 cri.go:89] found id: ""
	I1202 21:17:21.295443  313474 logs.go:282] 0 containers: []
	W1202 21:17:21.295450  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:21.295455  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:21.295522  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:21.339988  313474 cri.go:89] found id: ""
	I1202 21:17:21.340017  313474 logs.go:282] 0 containers: []
	W1202 21:17:21.340025  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:21.340031  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:21.340094  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:21.366146  313474 cri.go:89] found id: ""
	I1202 21:17:21.366159  313474 logs.go:282] 0 containers: []
	W1202 21:17:21.366166  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:21.366171  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:21.366234  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:21.396896  313474 cri.go:89] found id: ""
	I1202 21:17:21.396910  313474 logs.go:282] 0 containers: []
	W1202 21:17:21.396917  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:21.396922  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:21.396980  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:21.424236  313474 cri.go:89] found id: ""
	I1202 21:17:21.424249  313474 logs.go:282] 0 containers: []
	W1202 21:17:21.424256  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:21.424273  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:21.424284  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:21.452897  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:21.452913  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:21.511384  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:21.511402  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:21.527095  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:21.527121  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:21.587938  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:21.579696   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:21.580462   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:21.582330   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:21.582881   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:21.584480   13005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:21.587948  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:21.587958  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:24.156062  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:24.166297  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:24.166383  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:24.194537  313474 cri.go:89] found id: ""
	I1202 21:17:24.194550  313474 logs.go:282] 0 containers: []
	W1202 21:17:24.194558  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:24.194564  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:24.194624  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:24.218699  313474 cri.go:89] found id: ""
	I1202 21:17:24.218714  313474 logs.go:282] 0 containers: []
	W1202 21:17:24.218728  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:24.218734  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:24.218796  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:24.244266  313474 cri.go:89] found id: ""
	I1202 21:17:24.244280  313474 logs.go:282] 0 containers: []
	W1202 21:17:24.244287  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:24.244292  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:24.244352  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:24.269104  313474 cri.go:89] found id: ""
	I1202 21:17:24.269117  313474 logs.go:282] 0 containers: []
	W1202 21:17:24.269124  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:24.269129  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:24.269186  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:24.296650  313474 cri.go:89] found id: ""
	I1202 21:17:24.296663  313474 logs.go:282] 0 containers: []
	W1202 21:17:24.296671  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:24.296677  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:24.296745  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:24.323551  313474 cri.go:89] found id: ""
	I1202 21:17:24.323564  313474 logs.go:282] 0 containers: []
	W1202 21:17:24.323572  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:24.323579  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:24.323648  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:24.353085  313474 cri.go:89] found id: ""
	I1202 21:17:24.353109  313474 logs.go:282] 0 containers: []
	W1202 21:17:24.353117  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:24.353126  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:24.353136  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:24.382045  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:24.382062  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:24.438756  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:24.438773  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:24.454650  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:24.454665  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:24.517340  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:24.509295   13108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:24.509909   13108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:24.511497   13108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:24.512110   13108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:24.513756   13108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:24.517351  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:24.517371  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:27.081832  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:27.091605  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:27.091662  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:27.116713  313474 cri.go:89] found id: ""
	I1202 21:17:27.116726  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.116734  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:27.116739  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:27.116801  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:27.140809  313474 cri.go:89] found id: ""
	I1202 21:17:27.140823  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.140830  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:27.140835  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:27.140918  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:27.167221  313474 cri.go:89] found id: ""
	I1202 21:17:27.167235  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.167242  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:27.167247  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:27.167302  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:27.191660  313474 cri.go:89] found id: ""
	I1202 21:17:27.191674  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.191681  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:27.191686  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:27.191755  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:27.219696  313474 cri.go:89] found id: ""
	I1202 21:17:27.219719  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.219727  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:27.219732  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:27.219801  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:27.247486  313474 cri.go:89] found id: ""
	I1202 21:17:27.247499  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.247506  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:27.247512  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:27.247572  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:27.270666  313474 cri.go:89] found id: ""
	I1202 21:17:27.270679  313474 logs.go:282] 0 containers: []
	W1202 21:17:27.270687  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:27.270695  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:27.270704  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:27.329329  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:27.329349  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:27.350719  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:27.350735  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:27.420274  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:27.411429   13205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:27.412136   13205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:27.413912   13205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:27.414487   13205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:27.416006   13205 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:27.420285  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:27.420338  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:27.487442  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:27.487462  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:30.014027  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:30.043373  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:30.043450  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:30.070998  313474 cri.go:89] found id: ""
	I1202 21:17:30.071012  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.071020  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:30.071026  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:30.071090  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:30.100616  313474 cri.go:89] found id: ""
	I1202 21:17:30.100630  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.100643  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:30.100649  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:30.100710  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:30.130598  313474 cri.go:89] found id: ""
	I1202 21:17:30.130612  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.130620  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:30.130626  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:30.130687  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:30.157465  313474 cri.go:89] found id: ""
	I1202 21:17:30.157479  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.157486  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:30.157492  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:30.157550  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:30.182842  313474 cri.go:89] found id: ""
	I1202 21:17:30.182857  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.182864  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:30.182870  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:30.182930  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:30.211948  313474 cri.go:89] found id: ""
	I1202 21:17:30.211962  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.211969  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:30.211975  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:30.212034  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:30.240992  313474 cri.go:89] found id: ""
	I1202 21:17:30.241006  313474 logs.go:282] 0 containers: []
	W1202 21:17:30.241013  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:30.241020  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:30.241031  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:30.296604  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:30.296621  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:30.314431  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:30.314447  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:30.385351  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:30.377549   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:30.378411   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:30.379961   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:30.380269   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:30.381891   13309 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:30.385362  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:30.385372  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:30.451748  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:30.451771  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:32.983767  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:32.993977  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:32.994037  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:33.020180  313474 cri.go:89] found id: ""
	I1202 21:17:33.020195  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.020202  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:33.020208  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:33.020280  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:33.048366  313474 cri.go:89] found id: ""
	I1202 21:17:33.048379  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.048386  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:33.048392  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:33.048453  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:33.075220  313474 cri.go:89] found id: ""
	I1202 21:17:33.075240  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.075247  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:33.075253  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:33.075326  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:33.099808  313474 cri.go:89] found id: ""
	I1202 21:17:33.099823  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.099831  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:33.099837  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:33.099897  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:33.124213  313474 cri.go:89] found id: ""
	I1202 21:17:33.124226  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.124233  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:33.124239  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:33.124297  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:33.150102  313474 cri.go:89] found id: ""
	I1202 21:17:33.150116  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.150123  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:33.150129  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:33.150190  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:33.174754  313474 cri.go:89] found id: ""
	I1202 21:17:33.174768  313474 logs.go:282] 0 containers: []
	W1202 21:17:33.174775  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:33.174784  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:33.174794  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:33.243781  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:33.236366   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:33.236709   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:33.238184   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:33.238579   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:33.240086   13405 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:33.243791  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:33.243802  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:33.306573  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:33.306592  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:33.336859  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:33.336876  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:33.398386  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:33.398404  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:35.914658  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:35.924718  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:35.924778  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:35.950094  313474 cri.go:89] found id: ""
	I1202 21:17:35.950108  313474 logs.go:282] 0 containers: []
	W1202 21:17:35.950114  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:35.950120  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:35.950182  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:35.974633  313474 cri.go:89] found id: ""
	I1202 21:17:35.974647  313474 logs.go:282] 0 containers: []
	W1202 21:17:35.974654  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:35.974660  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:35.974719  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:35.998845  313474 cri.go:89] found id: ""
	I1202 21:17:35.998859  313474 logs.go:282] 0 containers: []
	W1202 21:17:35.998866  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:35.998872  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:35.998933  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:36.027158  313474 cri.go:89] found id: ""
	I1202 21:17:36.027173  313474 logs.go:282] 0 containers: []
	W1202 21:17:36.027186  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:36.027192  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:36.027259  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:36.052916  313474 cri.go:89] found id: ""
	I1202 21:17:36.052930  313474 logs.go:282] 0 containers: []
	W1202 21:17:36.052937  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:36.052942  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:36.053002  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:36.078331  313474 cri.go:89] found id: ""
	I1202 21:17:36.078345  313474 logs.go:282] 0 containers: []
	W1202 21:17:36.078353  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:36.078359  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:36.078421  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:36.102917  313474 cri.go:89] found id: ""
	I1202 21:17:36.102935  313474 logs.go:282] 0 containers: []
	W1202 21:17:36.102942  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:36.102952  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:36.102968  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:36.170369  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:36.162878   13506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:36.163399   13506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:36.164907   13506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:36.165325   13506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:36.166819   13506 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
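Every "describe nodes" attempt in this run fails the same way: nothing is listening on the profile's apiserver port, so kubectl's discovery requests to localhost:8441 are refused. A hedged sketch of probing that endpoint directly from inside the node (/healthz is the standard kube-apiserver health path; -k skips certificate verification):

    # A healthy apiserver answers 'ok'; here the probe should fail fast
    # with 'connection refused', matching the log above.
    curl -sk --max-time 5 https://localhost:8441/healthz || echo "apiserver not listening on :8441"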
	I1202 21:17:36.170381  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:36.170396  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:36.233123  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:36.233141  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:36.260318  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:36.260336  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:36.318506  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:36.318525  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
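The gatherer bundles the same evidence on each pass: the last 400 journal lines for containerd and kubelet, recent high-severity kernel messages, and a container listing that falls back from crictl to docker when crictl is absent. Collected manually it would look roughly like this (a sketch; the output file names are illustrative):

    sudo journalctl -u containerd -n 400 > containerd.log
    sudo journalctl -u kubelet    -n 400 > kubelet.log
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    # Same fallback chain as the logged 'container status' command:
    { sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a; } > containers.txt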
	I1202 21:17:38.836941  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:38.847151  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:38.847224  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:38.875586  313474 cri.go:89] found id: ""
	I1202 21:17:38.875599  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.875606  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:38.875612  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:38.875671  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:38.898500  313474 cri.go:89] found id: ""
	I1202 21:17:38.898514  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.898530  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:38.898538  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:38.898601  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:38.922709  313474 cri.go:89] found id: ""
	I1202 21:17:38.922723  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.922730  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:38.922735  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:38.922791  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:38.950687  313474 cri.go:89] found id: ""
	I1202 21:17:38.950701  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.950717  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:38.950723  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:38.950789  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:38.973477  313474 cri.go:89] found id: ""
	I1202 21:17:38.973490  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.973506  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:38.973514  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:38.973590  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:38.999179  313474 cri.go:89] found id: ""
	I1202 21:17:38.999193  313474 logs.go:282] 0 containers: []
	W1202 21:17:38.999200  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:38.999206  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:38.999264  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:39.028981  313474 cri.go:89] found id: ""
	I1202 21:17:39.028995  313474 logs.go:282] 0 containers: []
	W1202 21:17:39.029002  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:39.029010  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:39.029019  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:39.091914  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:39.091935  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:39.118017  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:39.118033  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:39.174784  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:39.174803  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:39.190239  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:39.190254  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:39.253019  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:39.244615   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:39.245484   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:39.247253   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:39.247889   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:39.249431   13630 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:41.753253  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
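Between evidence passes the collector re-checks for an apiserver process with pgrep: -x requires an exact match, -f matches against the full command line, and -n selects the newest matching PID. The roughly 3 s cadence visible in the timestamps could be reproduced with a simple wait loop (sketch; the pattern is copied from the log):

    # Poll until a kube-apiserver process for this minikube node appears.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done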
	I1202 21:17:41.763094  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:41.763167  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:41.787441  313474 cri.go:89] found id: ""
	I1202 21:17:41.787457  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.787464  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:41.787470  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:41.787529  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:41.815733  313474 cri.go:89] found id: ""
	I1202 21:17:41.815746  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.815753  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:41.815759  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:41.815819  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:41.839039  313474 cri.go:89] found id: ""
	I1202 21:17:41.839053  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.839060  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:41.839065  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:41.839125  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:41.867760  313474 cri.go:89] found id: ""
	I1202 21:17:41.867775  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.867783  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:41.867796  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:41.867860  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:41.894114  313474 cri.go:89] found id: ""
	I1202 21:17:41.894128  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.894135  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:41.894141  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:41.894202  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:41.918156  313474 cri.go:89] found id: ""
	I1202 21:17:41.918169  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.918177  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:41.918182  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:41.918242  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:41.942010  313474 cri.go:89] found id: ""
	I1202 21:17:41.942024  313474 logs.go:282] 0 containers: []
	W1202 21:17:41.942032  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:41.942040  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:41.942050  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:41.971871  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:41.971886  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:42.031586  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:42.031606  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:42.050658  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:42.050675  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:42.125237  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:42.114951   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:42.115932   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:42.118118   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:42.118731   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:42.120706   13734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
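The failing probe can also be reproduced from the host by running the pinned kubectl inside the node over minikube ssh; a hedged example, noting that the profile name does not appear in this excerpt and is assumed for illustration:

    # Profile name is hypothetical; the command and paths are as logged.
    minikube -p functional-753958 ssh -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      describe nodes --kubeconfig=/var/lib/minikube/kubeconfig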
	I1202 21:17:42.125249  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:42.125260  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:44.696530  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:44.706544  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:44.706605  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:44.734450  313474 cri.go:89] found id: ""
	I1202 21:17:44.734464  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.734470  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:44.734476  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:44.734535  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:44.758091  313474 cri.go:89] found id: ""
	I1202 21:17:44.758104  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.758111  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:44.758116  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:44.758178  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:44.782611  313474 cri.go:89] found id: ""
	I1202 21:17:44.782624  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.782631  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:44.782637  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:44.782700  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:44.806667  313474 cri.go:89] found id: ""
	I1202 21:17:44.806681  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.806689  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:44.806695  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:44.806757  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:44.830007  313474 cri.go:89] found id: ""
	I1202 21:17:44.830021  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.830031  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:44.830036  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:44.830098  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:44.853880  313474 cri.go:89] found id: ""
	I1202 21:17:44.853894  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.853901  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:44.853907  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:44.853970  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:44.878619  313474 cri.go:89] found id: ""
	I1202 21:17:44.878633  313474 logs.go:282] 0 containers: []
	W1202 21:17:44.878640  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:44.878647  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:44.878657  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:44.894269  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:44.894286  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:44.959621  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:44.952378   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:44.952780   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:44.954251   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:44.954543   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:44.956016   13828 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:44.959632  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:44.959645  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:45.023289  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:45.023311  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:45.085458  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:45.085476  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
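Since crictl keeps returning no containers at all, the node services themselves are worth ruling out next; the minikube node image runs systemd, so a quick triage could look like this (sketch, assuming systemctl is available inside the node):

    # Exit status is non-zero if either unit is not active.
    sudo systemctl is-active containerd kubelet \
      || sudo systemctl status kubelet --no-pager -n 20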
	I1202 21:17:47.687794  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:47.697486  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:47.697557  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:47.723246  313474 cri.go:89] found id: ""
	I1202 21:17:47.723259  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.723266  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:47.723272  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:47.723329  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:47.746713  313474 cri.go:89] found id: ""
	I1202 21:17:47.746726  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.746733  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:47.746739  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:47.746798  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:47.771766  313474 cri.go:89] found id: ""
	I1202 21:17:47.771779  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.771786  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:47.771791  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:47.771847  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:47.795263  313474 cri.go:89] found id: ""
	I1202 21:17:47.795277  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.795284  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:47.795289  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:47.795349  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:47.824522  313474 cri.go:89] found id: ""
	I1202 21:17:47.824536  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.824543  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:47.824548  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:47.824610  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:47.849074  313474 cri.go:89] found id: ""
	I1202 21:17:47.849089  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.849096  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:47.849102  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:47.849163  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:47.878497  313474 cri.go:89] found id: ""
	I1202 21:17:47.878512  313474 logs.go:282] 0 containers: []
	W1202 21:17:47.878518  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:47.878526  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:47.878537  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:47.935644  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:47.935663  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:47.951723  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:47.951739  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:48.020401  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:48.011900   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:48.012882   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:48.014694   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:48.015052   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:48.016693   13934 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:48.020422  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:48.020434  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:48.090722  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:48.090751  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:50.621799  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:50.631705  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:50.631774  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:50.656209  313474 cri.go:89] found id: ""
	I1202 21:17:50.656223  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.656230  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:50.656235  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:50.656300  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:50.680929  313474 cri.go:89] found id: ""
	I1202 21:17:50.680943  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.680950  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:50.680955  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:50.681014  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:50.705769  313474 cri.go:89] found id: ""
	I1202 21:17:50.705783  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.705790  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:50.705796  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:50.705858  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:50.731506  313474 cri.go:89] found id: ""
	I1202 21:17:50.731519  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.731526  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:50.731531  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:50.731588  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:50.754334  313474 cri.go:89] found id: ""
	I1202 21:17:50.754347  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.754354  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:50.754360  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:50.754421  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:50.778142  313474 cri.go:89] found id: ""
	I1202 21:17:50.778154  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.778162  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:50.778170  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:50.778228  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:50.801859  313474 cri.go:89] found id: ""
	I1202 21:17:50.801872  313474 logs.go:282] 0 containers: []
	W1202 21:17:50.801880  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:50.801887  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:50.801898  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:50.862528  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:50.854527   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.855204   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.856801   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.857287   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:50.858805   14032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:50.862542  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:50.862553  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:50.928955  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:50.928974  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:50.960442  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:50.960458  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:51.018671  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:51.018690  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
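Given that the refused dials consistently target localhost:8441, it can be worth confirming where the node-local kubeconfig points; a sketch of checking the server field the failing kubectl calls use:

    # Show which endpoint /var/lib/minikube/kubeconfig targets; given the
    # log it should read https://localhost:8441.
    sudo grep -n 'server:' /var/lib/minikube/kubeconfig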
	I1202 21:17:53.535533  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:53.550193  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:53.550254  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:53.579796  313474 cri.go:89] found id: ""
	I1202 21:17:53.579810  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.579817  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:53.579823  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:53.579885  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:53.606043  313474 cri.go:89] found id: ""
	I1202 21:17:53.606057  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.606063  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:53.606069  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:53.606125  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:53.631276  313474 cri.go:89] found id: ""
	I1202 21:17:53.631290  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.631297  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:53.631303  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:53.631360  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:53.662387  313474 cri.go:89] found id: ""
	I1202 21:17:53.662400  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.662407  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:53.662412  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:53.662467  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:53.686744  313474 cri.go:89] found id: ""
	I1202 21:17:53.686758  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.686765  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:53.686771  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:53.686832  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:53.710015  313474 cri.go:89] found id: ""
	I1202 21:17:53.710028  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.710035  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:53.710046  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:53.710102  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:53.733042  313474 cri.go:89] found id: ""
	I1202 21:17:53.733056  313474 logs.go:282] 0 containers: []
	W1202 21:17:53.733068  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:53.733076  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:53.733088  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:53.789666  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:53.789726  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:53.805097  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:53.805113  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:53.871790  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:53.864429   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.865010   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.866541   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.866977   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:53.868406   14142 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:53.871801  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:53.871813  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:53.935260  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:53.935279  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:56.466348  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:56.476763  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:56.476830  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:56.501775  313474 cri.go:89] found id: ""
	I1202 21:17:56.501789  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.501795  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:56.501801  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:56.501861  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:56.526404  313474 cri.go:89] found id: ""
	I1202 21:17:56.526417  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.526424  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:56.526429  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:56.526487  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:56.555809  313474 cri.go:89] found id: ""
	I1202 21:17:56.555823  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.555845  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:56.555852  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:56.555923  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:56.586754  313474 cri.go:89] found id: ""
	I1202 21:17:56.586767  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.586794  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:56.586803  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:56.586871  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:56.612048  313474 cri.go:89] found id: ""
	I1202 21:17:56.612061  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.612068  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:56.612074  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:56.612134  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:56.636363  313474 cri.go:89] found id: ""
	I1202 21:17:56.636376  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.636383  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:56.636399  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:56.636456  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:56.668372  313474 cri.go:89] found id: ""
	I1202 21:17:56.668393  313474 logs.go:282] 0 containers: []
	W1202 21:17:56.668400  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:56.668409  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:56.668418  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:56.724439  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:56.724458  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:56.740142  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:56.740161  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:56.802960  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:56.795097   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.796001   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.797561   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.798025   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:56.799523   14248 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:17:56.802970  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:56.802981  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:56.870497  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:56.870516  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:59.400859  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:17:59.410723  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:17:59.410792  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:17:59.434739  313474 cri.go:89] found id: ""
	I1202 21:17:59.434754  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.434761  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:17:59.434766  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:17:59.434823  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:17:59.459718  313474 cri.go:89] found id: ""
	I1202 21:17:59.459731  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.459738  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:17:59.459743  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:17:59.459800  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:17:59.484078  313474 cri.go:89] found id: ""
	I1202 21:17:59.484091  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.484098  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:17:59.484103  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:17:59.484161  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:17:59.510484  313474 cri.go:89] found id: ""
	I1202 21:17:59.510498  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.510505  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:17:59.510510  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:17:59.510569  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:17:59.535191  313474 cri.go:89] found id: ""
	I1202 21:17:59.535204  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.535211  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:17:59.535217  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:17:59.535278  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:17:59.566496  313474 cri.go:89] found id: ""
	I1202 21:17:59.566509  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.566516  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:17:59.566522  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:17:59.566591  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:17:59.605449  313474 cri.go:89] found id: ""
	I1202 21:17:59.605463  313474 logs.go:282] 0 containers: []
	W1202 21:17:59.605470  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:17:59.605479  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:17:59.605492  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:17:59.670641  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:17:59.670659  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:17:59.698362  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:17:59.698378  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:17:59.755057  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:17:59.755075  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:17:59.771334  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:17:59.771350  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:17:59.833359  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:17:59.825268   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.826042   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.827699   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.828304   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.830013   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:17:59.825268   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.826042   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.827699   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.828304   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:17:59.830013   14364 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
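The cycle recorded above is the harness's control-plane probe: check for a running kube-apiserver process, then list CRI containers for each expected component. As a rough illustration only (this is not minikube's actual logs.go/cri.go code; it is a hypothetical sketch that assumes passwordless sudo and crictl on PATH), the loop amounts to:

// probe.go - illustrative sketch of the probe loop this log records.
// NOT minikube source; assumes passwordless sudo and crictl on PATH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// The same component names the log queries, in the same order.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

func apiserverRunning() bool {
	// Mirrors: sudo pgrep -xnf kube-apiserver.*minikube.*
	// pgrep exits non-zero when no process matches.
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func listContainers(name string) []string {
	// Mirrors: sudo crictl ps -a --quiet --name=<component>
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out)) // one container ID per line
}

func main() {
	for {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		for _, c := range components {
			ids := listContainers(c)
			fmt.Printf("%-24s %d containers: %v\n", c, len(ids), ids)
		}
		time.Sleep(3 * time.Second) // the log shows ~3s between cycles
	}
}

In this run every listContainers call comes back empty ("found id: \"\" ... 0 containers"), so the probe never exits and the cycle repeats until the outer start timeout fires.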
	I1202 21:18:02.334350  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:02.344576  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:02.344646  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:02.372330  313474 cri.go:89] found id: ""
	I1202 21:18:02.372347  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.372355  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:02.372361  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:02.372421  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:02.403621  313474 cri.go:89] found id: ""
	I1202 21:18:02.403635  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.403642  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:02.403648  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:02.403710  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:02.432672  313474 cri.go:89] found id: ""
	I1202 21:18:02.432686  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.432693  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:02.432700  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:02.432762  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:02.464631  313474 cri.go:89] found id: ""
	I1202 21:18:02.464645  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.464652  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:02.464658  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:02.464720  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:02.491546  313474 cri.go:89] found id: ""
	I1202 21:18:02.491559  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.491566  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:02.491572  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:02.491628  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:02.515275  313474 cri.go:89] found id: ""
	I1202 21:18:02.515289  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.515296  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:02.515301  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:02.515361  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:02.542560  313474 cri.go:89] found id: ""
	I1202 21:18:02.542574  313474 logs.go:282] 0 containers: []
	W1202 21:18:02.542581  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:02.542589  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:02.542599  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:02.602107  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:02.602123  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:02.624739  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:02.624757  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:02.689790  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:02.681842   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.682258   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.683537   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.684226   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.686056   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:02.681842   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.682258   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.683537   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.684226   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:02.686056   14457 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:02.689808  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:02.689819  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:02.752499  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:02.752518  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
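Every "describe nodes" attempt in this log fails the same way: kubectl cannot reach https://localhost:8441 because nothing is listening on the apiserver port (8441 is the --apiserver-port this profile was started with). A quick, hypothetical check that reproduces the same dial error from inside the node:

// dialcheck.go - confirms what the repeated "connection refused"
// means: no listener on the apiserver port. Hypothetical helper,
// not part of the test suite.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// Matches the kubectl stderr in the log:
		// "dial tcp [::1]:8441: connect: connection refused"
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8441")
}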
	I1202 21:18:05.283528  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:05.293718  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:05.293787  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:05.317745  313474 cri.go:89] found id: ""
	I1202 21:18:05.317758  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.317764  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:05.317770  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:05.317825  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:05.342721  313474 cri.go:89] found id: ""
	I1202 21:18:05.342735  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.342742  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:05.342747  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:05.342805  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:05.367273  313474 cri.go:89] found id: ""
	I1202 21:18:05.367295  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.367303  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:05.367311  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:05.367374  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:05.392617  313474 cri.go:89] found id: ""
	I1202 21:18:05.392630  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.392639  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:05.392644  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:05.392720  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:05.416853  313474 cri.go:89] found id: ""
	I1202 21:18:05.416866  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.416873  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:05.416878  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:05.416939  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:05.440831  313474 cri.go:89] found id: ""
	I1202 21:18:05.440845  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.440852  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:05.440858  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:05.440925  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:05.468689  313474 cri.go:89] found id: ""
	I1202 21:18:05.468702  313474 logs.go:282] 0 containers: []
	W1202 21:18:05.468709  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:05.468718  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:05.468728  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:05.532922  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:05.524825   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.525211   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.526892   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.527288   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.529015   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:05.524825   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.525211   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.526892   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.527288   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:05.529015   14552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:05.532931  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:05.532956  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:05.603067  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:05.603086  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:05.634107  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:05.634125  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:05.690509  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:05.690527  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:08.208420  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:08.218671  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:08.218745  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:08.244809  313474 cri.go:89] found id: ""
	I1202 21:18:08.244823  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.244831  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:08.244837  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:08.244895  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:08.270054  313474 cri.go:89] found id: ""
	I1202 21:18:08.270068  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.270075  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:08.270080  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:08.270145  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:08.295277  313474 cri.go:89] found id: ""
	I1202 21:18:08.295291  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.295298  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:08.295304  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:08.295366  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:08.319112  313474 cri.go:89] found id: ""
	I1202 21:18:08.319125  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.319132  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:08.319138  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:08.319205  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:08.342874  313474 cri.go:89] found id: ""
	I1202 21:18:08.342888  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.342901  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:08.342908  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:08.342965  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:08.371370  313474 cri.go:89] found id: ""
	I1202 21:18:08.371384  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.371391  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:08.371397  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:08.371464  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:08.396154  313474 cri.go:89] found id: ""
	I1202 21:18:08.396167  313474 logs.go:282] 0 containers: []
	W1202 21:18:08.396175  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:08.396183  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:08.396193  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:08.451337  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:08.451356  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:08.466550  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:08.466565  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:08.528549  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:08.520562   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.521190   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.522827   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.523353   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.525032   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:08.520562   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.521190   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.522827   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.523353   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:08.525032   14663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:08.528558  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:08.528569  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:08.606008  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:08.606028  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:11.138262  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:11.148937  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:11.148998  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:11.173696  313474 cri.go:89] found id: ""
	I1202 21:18:11.173710  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.173718  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:11.173723  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:11.173790  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:11.198792  313474 cri.go:89] found id: ""
	I1202 21:18:11.198805  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.198813  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:11.198818  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:11.198880  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:11.222802  313474 cri.go:89] found id: ""
	I1202 21:18:11.222816  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.222823  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:11.222829  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:11.222890  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:11.247731  313474 cri.go:89] found id: ""
	I1202 21:18:11.247745  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.247752  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:11.247757  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:11.247814  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:11.272133  313474 cri.go:89] found id: ""
	I1202 21:18:11.272146  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.272153  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:11.272159  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:11.272217  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:11.296871  313474 cri.go:89] found id: ""
	I1202 21:18:11.296885  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.296892  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:11.296897  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:11.296958  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:11.321716  313474 cri.go:89] found id: ""
	I1202 21:18:11.321729  313474 logs.go:282] 0 containers: []
	W1202 21:18:11.321736  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:11.321744  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:11.321754  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:11.377048  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:11.377066  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:11.393570  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:11.393587  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:11.458188  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:11.449467   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.450311   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.451986   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.452297   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.454177   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:11.449467   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.450311   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.451986   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.452297   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:11.454177   14767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:11.458204  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:11.458220  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:11.525584  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:11.525602  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:14.058201  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:14.068731  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:14.068793  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:14.095660  313474 cri.go:89] found id: ""
	I1202 21:18:14.095674  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.095682  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:14.095688  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:14.095754  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:14.122077  313474 cri.go:89] found id: ""
	I1202 21:18:14.122090  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.122097  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:14.122102  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:14.122163  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:14.150178  313474 cri.go:89] found id: ""
	I1202 21:18:14.150192  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.150199  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:14.150204  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:14.150265  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:14.175340  313474 cri.go:89] found id: ""
	I1202 21:18:14.175353  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.175360  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:14.175372  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:14.175431  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:14.199105  313474 cri.go:89] found id: ""
	I1202 21:18:14.199118  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.199125  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:14.199130  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:14.199187  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:14.224274  313474 cri.go:89] found id: ""
	I1202 21:18:14.224288  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.224295  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:14.224300  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:14.224363  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:14.251445  313474 cri.go:89] found id: ""
	I1202 21:18:14.251458  313474 logs.go:282] 0 containers: []
	W1202 21:18:14.251465  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:14.251473  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:14.251487  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:14.320250  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:14.311973   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.312750   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.314433   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.314978   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.316585   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:14.311973   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.312750   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.314433   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.314978   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:14.316585   14864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:14.320261  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:14.320274  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:14.383255  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:14.383276  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:14.411409  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:14.411425  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:14.472223  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:14.472248  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:16.989804  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:17.000093  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:17.000155  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:17.028092  313474 cri.go:89] found id: ""
	I1202 21:18:17.028116  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.028124  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:17.028130  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:17.028198  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:17.052924  313474 cri.go:89] found id: ""
	I1202 21:18:17.052945  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.052952  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:17.052958  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:17.053029  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:17.078703  313474 cri.go:89] found id: ""
	I1202 21:18:17.078727  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.078734  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:17.078742  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:17.078812  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:17.104168  313474 cri.go:89] found id: ""
	I1202 21:18:17.104182  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.104189  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:17.104195  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:17.104299  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:17.127996  313474 cri.go:89] found id: ""
	I1202 21:18:17.128010  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.128017  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:17.128023  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:17.128088  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:17.152013  313474 cri.go:89] found id: ""
	I1202 21:18:17.152027  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.152034  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:17.152040  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:17.152100  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:17.180838  313474 cri.go:89] found id: ""
	I1202 21:18:17.180853  313474 logs.go:282] 0 containers: []
	W1202 21:18:17.180860  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:17.180868  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:17.180878  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:17.208724  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:17.208740  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:17.264017  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:17.264035  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:17.280767  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:17.280783  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:17.347738  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:17.340260   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.340861   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.342357   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.342869   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.344337   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:17.340260   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.340861   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.342357   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.342869   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:17.344337   14988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:17.347749  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:17.347762  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:19.913786  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:19.923690  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:19.923756  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:19.948485  313474 cri.go:89] found id: ""
	I1202 21:18:19.948499  313474 logs.go:282] 0 containers: []
	W1202 21:18:19.948506  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:19.948512  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:19.948572  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:19.973040  313474 cri.go:89] found id: ""
	I1202 21:18:19.973054  313474 logs.go:282] 0 containers: []
	W1202 21:18:19.973062  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:19.973067  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:19.973129  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:19.997059  313474 cri.go:89] found id: ""
	I1202 21:18:19.997073  313474 logs.go:282] 0 containers: []
	W1202 21:18:19.997080  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:19.997086  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:19.997143  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:20.023852  313474 cri.go:89] found id: ""
	I1202 21:18:20.023868  313474 logs.go:282] 0 containers: []
	W1202 21:18:20.023876  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:20.023882  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:20.023963  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:20.050761  313474 cri.go:89] found id: ""
	I1202 21:18:20.050775  313474 logs.go:282] 0 containers: []
	W1202 21:18:20.050782  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:20.050788  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:20.050849  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:20.080281  313474 cri.go:89] found id: ""
	I1202 21:18:20.080299  313474 logs.go:282] 0 containers: []
	W1202 21:18:20.080318  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:20.080324  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:20.080396  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:20.104993  313474 cri.go:89] found id: ""
	I1202 21:18:20.105008  313474 logs.go:282] 0 containers: []
	W1202 21:18:20.105015  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:20.105024  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:20.105035  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:20.165434  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:20.165453  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:20.181890  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:20.181907  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:20.248978  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:20.240575   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.241189   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.242918   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.243424   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.244930   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:18:20.240575   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.241189   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.242918   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.243424   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:20.244930   15079 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:18:20.248989  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:20.249000  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:20.310960  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:20.310980  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:22.840884  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:22.851984  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:22.852053  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:22.877753  313474 cri.go:89] found id: ""
	I1202 21:18:22.877766  313474 logs.go:282] 0 containers: []
	W1202 21:18:22.877773  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:22.877779  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:22.877837  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:22.906410  313474 cri.go:89] found id: ""
	I1202 21:18:22.906424  313474 logs.go:282] 0 containers: []
	W1202 21:18:22.906431  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:22.906437  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:22.906500  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:22.930057  313474 cri.go:89] found id: ""
	I1202 21:18:22.930071  313474 logs.go:282] 0 containers: []
	W1202 21:18:22.930077  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:22.930083  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:22.930143  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:22.953434  313474 cri.go:89] found id: ""
	I1202 21:18:22.953447  313474 logs.go:282] 0 containers: []
	W1202 21:18:22.953454  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:22.953460  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:22.953537  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:22.977521  313474 cri.go:89] found id: ""
	I1202 21:18:22.977534  313474 logs.go:282] 0 containers: []
	W1202 21:18:22.977541  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:22.977546  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:22.977605  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:23.002292  313474 cri.go:89] found id: ""
	I1202 21:18:23.002308  313474 logs.go:282] 0 containers: []
	W1202 21:18:23.002316  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:23.002322  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:23.002394  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:23.036373  313474 cri.go:89] found id: ""
	I1202 21:18:23.036387  313474 logs.go:282] 0 containers: []
	W1202 21:18:23.036395  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:23.036403  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:23.036415  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:23.095655  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:23.095673  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:23.111535  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:23.111553  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:23.173705  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:23.165173   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.166011   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.167619   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.168221   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:23.169997   15182 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:23.173715  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:23.173726  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:23.236268  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:23.236289  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
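	Each "describe nodes" attempt above fails the same way: nothing is answering on the apiserver port. A minimal manual check along the same lines, assuming the profile's port 8441 from this run (the /healthz path is a standard kube-apiserver endpoint, not something shown in this log):

	    # Is a kube-apiserver process running at all? Same pattern as the poll above.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no apiserver process'
	    # Is anything listening on the expected port? -k skips TLS verification.
	    curl -sk https://localhost:8441/healthz || echo 'connection refused on 8441'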
	I1202 21:18:25.766078  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:25.775931  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:25.775992  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:25.803734  313474 cri.go:89] found id: ""
	I1202 21:18:25.803748  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.803755  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:25.803761  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:25.803819  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:25.834986  313474 cri.go:89] found id: ""
	I1202 21:18:25.834998  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.835005  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:25.835011  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:25.835067  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:25.868893  313474 cri.go:89] found id: ""
	I1202 21:18:25.868906  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.868914  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:25.868919  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:25.868978  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:25.893444  313474 cri.go:89] found id: ""
	I1202 21:18:25.893458  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.893465  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:25.893470  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:25.893535  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:25.920960  313474 cri.go:89] found id: ""
	I1202 21:18:25.920981  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.921016  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:25.921022  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:25.921084  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:25.945498  313474 cri.go:89] found id: ""
	I1202 21:18:25.945512  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.945519  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:25.945524  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:25.945584  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:25.970324  313474 cri.go:89] found id: ""
	I1202 21:18:25.970338  313474 logs.go:282] 0 containers: []
	W1202 21:18:25.970345  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:25.970352  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:25.970363  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:26.026110  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:26.026130  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:26.042911  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:26.042929  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:26.110842  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:26.102647   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.103280   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.105091   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.105699   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:26.107315   15285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:26.110852  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:26.110863  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:26.172311  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:26.172331  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
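	The listing pass above queries one control-plane component at a time with the same crictl call. A compact sketch of that sweep, reusing only the command shown in this log (the loop itself is illustrative, not minikube's code):

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      # crictl prints nothing when no container matches, which is what
	      # produces the "0 containers" results logged above.
	      [ -n "$ids" ] && echo "$name: $ids" || echo "no container matching \"$name\""
	    done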
	I1202 21:18:28.700308  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:28.710060  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:28.710120  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:28.735161  313474 cri.go:89] found id: ""
	I1202 21:18:28.735174  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.735181  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:28.735186  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:28.735244  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:28.759111  313474 cri.go:89] found id: ""
	I1202 21:18:28.759125  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.759132  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:28.759138  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:28.759195  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:28.782985  313474 cri.go:89] found id: ""
	I1202 21:18:28.782999  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.783006  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:28.783011  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:28.783069  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:28.820172  313474 cri.go:89] found id: ""
	I1202 21:18:28.820186  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.820203  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:28.820208  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:28.820274  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:28.850833  313474 cri.go:89] found id: ""
	I1202 21:18:28.850846  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.850863  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:28.850869  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:28.850927  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:28.882012  313474 cri.go:89] found id: ""
	I1202 21:18:28.882025  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.882032  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:28.882038  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:28.882093  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:28.908111  313474 cri.go:89] found id: ""
	I1202 21:18:28.908125  313474 logs.go:282] 0 containers: []
	W1202 21:18:28.908132  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:28.908139  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:28.908150  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:28.934318  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:28.934333  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:28.989499  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:28.989518  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:29.007046  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:29.007064  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:29.083779  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:29.075539   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.076231   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.077811   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.078418   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:29.080191   15398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:29.083789  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:29.083801  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
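	With no control-plane containers to inspect, the remaining evidence is in the host logs. The same sources gathered above can be captured by hand; the flags are copied verbatim from this run, and only the output file names are invented here:

	    sudo journalctl -u kubelet -n 400    > kubelet.log
	    sudo journalctl -u containerd -n 400 > containerd.log
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	    # Fall back to docker when crictl is absent, as the log gatherer does.
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a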
	I1202 21:18:31.646079  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:31.657486  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:31.657549  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:31.683678  313474 cri.go:89] found id: ""
	I1202 21:18:31.683692  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.683699  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:31.683704  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:31.683759  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:31.712328  313474 cri.go:89] found id: ""
	I1202 21:18:31.712342  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.712349  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:31.712354  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:31.712410  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:31.736788  313474 cri.go:89] found id: ""
	I1202 21:18:31.736802  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.736808  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:31.736814  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:31.736870  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:31.761882  313474 cri.go:89] found id: ""
	I1202 21:18:31.761896  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.761903  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:31.761908  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:31.761968  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:31.785756  313474 cri.go:89] found id: ""
	I1202 21:18:31.785770  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.785778  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:31.785783  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:31.785843  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:31.820411  313474 cri.go:89] found id: ""
	I1202 21:18:31.820424  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.820431  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:31.820437  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:31.820493  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:31.853589  313474 cri.go:89] found id: ""
	I1202 21:18:31.853603  313474 logs.go:282] 0 containers: []
	W1202 21:18:31.853611  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:31.853619  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:31.853630  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:31.921797  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:31.913330   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.913979   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.915473   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.915981   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:31.917835   15483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:31.921807  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:31.921818  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:31.983142  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:31.983161  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:32.019032  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:32.019047  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:32.075826  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:32.075845  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:34.595298  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:34.606306  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:34.606370  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:34.629306  313474 cri.go:89] found id: ""
	I1202 21:18:34.629321  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.629328  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:34.629334  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:34.629393  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:34.653285  313474 cri.go:89] found id: ""
	I1202 21:18:34.653299  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.653305  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:34.653311  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:34.653369  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:34.679517  313474 cri.go:89] found id: ""
	I1202 21:18:34.679531  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.679538  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:34.679543  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:34.679601  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:34.703382  313474 cri.go:89] found id: ""
	I1202 21:18:34.703395  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.703403  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:34.703409  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:34.703472  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:34.726696  313474 cri.go:89] found id: ""
	I1202 21:18:34.726710  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.726717  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:34.726723  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:34.726784  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:34.751128  313474 cri.go:89] found id: ""
	I1202 21:18:34.751141  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.751148  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:34.751153  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:34.751213  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:34.775011  313474 cri.go:89] found id: ""
	I1202 21:18:34.775025  313474 logs.go:282] 0 containers: []
	W1202 21:18:34.775032  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:34.775047  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:34.775057  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:34.835694  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:34.835712  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:34.852614  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:34.852628  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:34.915032  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:34.907089   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.907665   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.909375   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.909948   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:34.911554   15594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:34.915042  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:34.915053  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:34.976914  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:34.976933  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
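	The gather cycles repeat on a roughly three-second cadence until an apiserver process appears or the start gives up. A hypothetical shell equivalent of that wait (the 300-second budget is an assumption, not a value taken from this log):

	    deadline=$(( $(date +%s) + 300 ))   # assumed overall budget, not minikube's actual timeout
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      [ "$(date +%s)" -ge "$deadline" ] && { echo 'timed out waiting for kube-apiserver' >&2; exit 1; }
	      sleep 3   # matches the ~3s spacing of the poll timestamps above
	    done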
	I1202 21:18:37.512733  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:37.523297  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:37.523360  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:37.547452  313474 cri.go:89] found id: ""
	I1202 21:18:37.547471  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.547478  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:37.547484  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:37.547553  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:37.573439  313474 cri.go:89] found id: ""
	I1202 21:18:37.573453  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.573460  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:37.573471  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:37.573529  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:37.597566  313474 cri.go:89] found id: ""
	I1202 21:18:37.597579  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.597586  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:37.597593  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:37.597689  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:37.622743  313474 cri.go:89] found id: ""
	I1202 21:18:37.622757  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.622764  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:37.622769  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:37.622833  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:37.650998  313474 cri.go:89] found id: ""
	I1202 21:18:37.651012  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.651019  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:37.651024  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:37.651082  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:37.675113  313474 cri.go:89] found id: ""
	I1202 21:18:37.675126  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.675133  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:37.675139  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:37.675198  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:37.703998  313474 cri.go:89] found id: ""
	I1202 21:18:37.704011  313474 logs.go:282] 0 containers: []
	W1202 21:18:37.704019  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:37.704028  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:37.704039  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:37.731894  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:37.731909  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:37.789286  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:37.789304  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:37.806026  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:37.806041  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:37.883651  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:37.875622   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.876183   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.877815   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.878233   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:37.879787   15712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:37.883661  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:37.883672  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:40.449584  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:40.459754  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:40.459815  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:40.484277  313474 cri.go:89] found id: ""
	I1202 21:18:40.484290  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.484297  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:40.484303  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:40.484363  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:40.512957  313474 cri.go:89] found id: ""
	I1202 21:18:40.512971  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.512978  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:40.512984  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:40.513043  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:40.539344  313474 cri.go:89] found id: ""
	I1202 21:18:40.539357  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.539365  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:40.539371  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:40.539439  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:40.569762  313474 cri.go:89] found id: ""
	I1202 21:18:40.569776  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.569783  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:40.569789  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:40.569865  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:40.599530  313474 cri.go:89] found id: ""
	I1202 21:18:40.599589  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.599597  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:40.599603  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:40.599663  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:40.624508  313474 cri.go:89] found id: ""
	I1202 21:18:40.624521  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.624527  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:40.624533  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:40.624590  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:40.654772  313474 cri.go:89] found id: ""
	I1202 21:18:40.654786  313474 logs.go:282] 0 containers: []
	W1202 21:18:40.654793  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:40.654800  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:40.654811  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:40.671128  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:40.671146  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:40.739442  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:40.731281   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.732035   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.733699   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.734266   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:40.735915   15800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:40.739452  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:40.739465  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:40.802579  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:40.802600  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:40.842887  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:40.842905  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:43.407132  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:43.417207  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:43.417283  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:43.445187  313474 cri.go:89] found id: ""
	I1202 21:18:43.445201  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.445208  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:43.445214  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:43.445270  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:43.469935  313474 cri.go:89] found id: ""
	I1202 21:18:43.469949  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.469957  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:43.469962  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:43.470021  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:43.495370  313474 cri.go:89] found id: ""
	I1202 21:18:43.495383  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.495391  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:43.495396  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:43.495454  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:43.519120  313474 cri.go:89] found id: ""
	I1202 21:18:43.519133  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.519149  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:43.519155  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:43.519213  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:43.548201  313474 cri.go:89] found id: ""
	I1202 21:18:43.548216  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.548223  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:43.548228  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:43.548290  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:43.573077  313474 cri.go:89] found id: ""
	I1202 21:18:43.573091  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.573099  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:43.573104  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:43.573166  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:43.598032  313474 cri.go:89] found id: ""
	I1202 21:18:43.598046  313474 logs.go:282] 0 containers: []
	W1202 21:18:43.598053  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:43.598062  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:43.598072  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:43.625764  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:43.625780  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:43.681770  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:43.681787  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:43.698012  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:43.698028  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:43.764049  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:43.756290   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.756978   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.758602   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.759087   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:43.760588   15917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:43.764060  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:43.764071  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:46.332493  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:46.342812  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:46.342877  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:46.367988  313474 cri.go:89] found id: ""
	I1202 21:18:46.368002  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.368018  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:46.368024  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:46.368091  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:46.392483  313474 cri.go:89] found id: ""
	I1202 21:18:46.392496  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.392512  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:46.392518  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:46.392574  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:46.429495  313474 cri.go:89] found id: ""
	I1202 21:18:46.429514  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.429522  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:46.429527  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:46.429598  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:46.455204  313474 cri.go:89] found id: ""
	I1202 21:18:46.455218  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.455225  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:46.455231  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:46.455295  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:46.479783  313474 cri.go:89] found id: ""
	I1202 21:18:46.479800  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.479808  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:46.479813  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:46.479880  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:46.504674  313474 cri.go:89] found id: ""
	I1202 21:18:46.504688  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.504696  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:46.504701  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:46.504767  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:46.534919  313474 cri.go:89] found id: ""
	I1202 21:18:46.534933  313474 logs.go:282] 0 containers: []
	W1202 21:18:46.534940  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:46.534948  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:46.534968  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:46.591507  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:46.591526  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:46.607216  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:46.607233  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:46.672448  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:46.664475   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.665046   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.666657   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.667197   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:46.668631   16011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:46.672459  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:46.672469  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:46.738404  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:46.738424  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
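
Each cycle above repeats the same probe: pgrep for a kube-apiserver process, crictl queries for every control-plane container (all empty), then a kubectl describe nodes that dies on connection refused to localhost:8441. To rerun that probe by hand inside the node, a minimal bash sketch — the port and the crictl invocation are taken from the log above; the `minikube ssh` profile name is a placeholder:

    # Manual version of the probe minikube loops on above (run inside the node,
    # e.g. via `minikube ssh -p PROFILE`, where PROFILE is a placeholder).
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    sudo crictl ps -a --quiet --name=kube-apiserver | grep -q . || echo "no kube-apiserver container"
    curl -sk https://localhost:8441/healthz || echo "port 8441 refused"
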
	I1202 21:18:49.269367  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:49.279307  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:49.279370  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:49.302419  313474 cri.go:89] found id: ""
	I1202 21:18:49.302432  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.302439  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:49.302445  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:49.302501  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:49.328004  313474 cri.go:89] found id: ""
	I1202 21:18:49.328018  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.328025  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:49.328030  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:49.328088  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:49.352661  313474 cri.go:89] found id: ""
	I1202 21:18:49.352675  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.352682  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:49.352687  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:49.352746  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:49.377363  313474 cri.go:89] found id: ""
	I1202 21:18:49.377376  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.377383  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:49.377389  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:49.377447  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:49.401369  313474 cri.go:89] found id: ""
	I1202 21:18:49.401383  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.401390  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:49.401396  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:49.401461  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:49.425207  313474 cri.go:89] found id: ""
	I1202 21:18:49.425221  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.425228  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:49.425233  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:49.425295  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:49.451589  313474 cri.go:89] found id: ""
	I1202 21:18:49.451604  313474 logs.go:282] 0 containers: []
	W1202 21:18:49.451611  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:49.451619  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:49.451630  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:49.513462  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:49.505690   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:49.506363   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:49.507990   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:49.508509   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:49.510072   16109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:49.513472  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:49.513482  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:49.575782  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:49.575801  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:49.610890  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:49.610906  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:49.667106  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:49.667123  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:52.184506  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:52.194827  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:52.194887  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:52.221289  313474 cri.go:89] found id: ""
	I1202 21:18:52.221303  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.221310  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:52.221315  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:52.221385  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:52.247152  313474 cri.go:89] found id: ""
	I1202 21:18:52.247167  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.247174  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:52.247179  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:52.247240  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:52.270523  313474 cri.go:89] found id: ""
	I1202 21:18:52.270539  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.270545  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:52.270550  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:52.270610  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:52.294232  313474 cri.go:89] found id: ""
	I1202 21:18:52.294246  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.294253  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:52.294259  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:52.294321  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:52.322550  313474 cri.go:89] found id: ""
	I1202 21:18:52.322563  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.322570  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:52.322576  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:52.322635  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:52.350081  313474 cri.go:89] found id: ""
	I1202 21:18:52.350095  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.350103  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:52.350110  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:52.350171  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:52.373782  313474 cri.go:89] found id: ""
	I1202 21:18:52.373796  313474 logs.go:282] 0 containers: []
	W1202 21:18:52.373817  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:52.373826  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:52.373836  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:52.429396  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:52.429415  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:52.445303  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:52.445319  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:52.509061  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:52.500762   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:52.501579   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:52.503214   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:52.503522   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:52.505017   16219 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:52.509073  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:52.509087  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:52.572171  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:52.572191  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:18:55.105321  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:55.115684  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:55.115746  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:55.143285  313474 cri.go:89] found id: ""
	I1202 21:18:55.143301  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.143313  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:55.143319  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:55.143379  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:55.168631  313474 cri.go:89] found id: ""
	I1202 21:18:55.168645  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.168652  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:55.168658  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:55.168718  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:55.194277  313474 cri.go:89] found id: ""
	I1202 21:18:55.194290  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.194297  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:55.194303  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:55.194361  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:55.221594  313474 cri.go:89] found id: ""
	I1202 21:18:55.221607  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.221614  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:55.221620  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:55.221738  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:55.245639  313474 cri.go:89] found id: ""
	I1202 21:18:55.245684  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.245691  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:55.245697  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:55.245758  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:55.270064  313474 cri.go:89] found id: ""
	I1202 21:18:55.270078  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.270085  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:55.270091  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:55.270151  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:55.298494  313474 cri.go:89] found id: ""
	I1202 21:18:55.298508  313474 logs.go:282] 0 containers: []
	W1202 21:18:55.298515  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:55.298524  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:55.298534  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:55.354337  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:55.354358  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:55.371291  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:55.371306  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:55.441025  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:55.432197   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:55.433031   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:55.434888   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:55.435565   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:55.437238   16324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:55.441036  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:55.441048  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:55.508470  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:55.508491  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
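
The log sources gathered in each cycle (kubelet, dmesg, describe nodes, containerd, container status) can also be captured once to files for offline triage. A sketch reusing the exact commands from the log above; the output filenames are arbitrary:

    # One-shot capture of the same diagnostics minikube gathers each cycle.
    sudo journalctl -u kubelet -n 400 > kubelet.log
    sudo journalctl -u containerd -n 400 > containerd.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo crictl ps -a > containers.log
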
	I1202 21:18:58.040648  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:18:58.052163  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:18:58.052231  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:18:58.082641  313474 cri.go:89] found id: ""
	I1202 21:18:58.082655  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.082663  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:18:58.082668  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:18:58.082727  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:18:58.109547  313474 cri.go:89] found id: ""
	I1202 21:18:58.109561  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.109579  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:18:58.109585  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:18:58.109687  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:18:58.134886  313474 cri.go:89] found id: ""
	I1202 21:18:58.134900  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.134908  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:18:58.134913  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:18:58.134973  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:18:58.158535  313474 cri.go:89] found id: ""
	I1202 21:18:58.158549  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.158555  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:18:58.158561  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:18:58.158626  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:18:58.181483  313474 cri.go:89] found id: ""
	I1202 21:18:58.181498  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.181505  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:18:58.181510  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:18:58.181567  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:18:58.207661  313474 cri.go:89] found id: ""
	I1202 21:18:58.207675  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.207682  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:18:58.207687  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:18:58.207744  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:18:58.231079  313474 cri.go:89] found id: ""
	I1202 21:18:58.231092  313474 logs.go:282] 0 containers: []
	W1202 21:18:58.231099  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:18:58.231107  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:18:58.231117  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:18:58.286068  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:18:58.286086  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:18:58.301966  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:18:58.301983  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:18:58.371817  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:18:58.363950   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:58.364690   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:58.366066   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:58.366688   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:18:58.368325   16430 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:18:58.371827  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:18:58.371838  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:18:58.434916  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:18:58.434935  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:00.970468  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:00.981089  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:00.981161  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:01.007840  313474 cri.go:89] found id: ""
	I1202 21:19:01.007855  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.007863  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:01.007868  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:01.007927  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:01.032203  313474 cri.go:89] found id: ""
	I1202 21:19:01.032217  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.032224  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:01.032229  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:01.032300  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:01.065098  313474 cri.go:89] found id: ""
	I1202 21:19:01.065111  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.065119  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:01.065124  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:01.065186  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:01.091481  313474 cri.go:89] found id: ""
	I1202 21:19:01.091495  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.091502  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:01.091508  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:01.091584  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:01.119523  313474 cri.go:89] found id: ""
	I1202 21:19:01.119538  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.119546  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:01.119552  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:01.119617  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:01.145559  313474 cri.go:89] found id: ""
	I1202 21:19:01.145574  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.145584  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:01.145590  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:01.145699  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:01.171870  313474 cri.go:89] found id: ""
	I1202 21:19:01.171885  313474 logs.go:282] 0 containers: []
	W1202 21:19:01.171892  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:01.171900  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:01.171929  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:01.236730  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:01.228637   16526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:01.229293   16526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:01.230833   16526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:01.231277   16526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:01.232768   16526 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:01.236741  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:01.236752  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:01.298712  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:01.298731  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:01.327192  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:01.327213  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:01.382852  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:01.382869  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
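
Note that kubectl is dialing [::1]:8441, the IPv6 loopback. To rule out a v4-only listener, a quick check of both loopbacks — a sketch using bash's /dev/tcp and ss, not commands from this run:

    # kubectl resolved localhost to ::1; verify nothing is listening on either loopback.
    (echo > /dev/tcp/127.0.0.1/8441) 2>/dev/null && echo "v4 open" || echo "v4 refused"
    sudo ss -ltnp | grep -w 8441 || echo "no listener on 8441"
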
	I1202 21:19:03.899143  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:03.908997  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:03.909059  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:03.932688  313474 cri.go:89] found id: ""
	I1202 21:19:03.932701  313474 logs.go:282] 0 containers: []
	W1202 21:19:03.932708  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:03.932714  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:03.932773  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:03.957073  313474 cri.go:89] found id: ""
	I1202 21:19:03.957087  313474 logs.go:282] 0 containers: []
	W1202 21:19:03.957095  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:03.957100  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:03.957161  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:03.981206  313474 cri.go:89] found id: ""
	I1202 21:19:03.981219  313474 logs.go:282] 0 containers: []
	W1202 21:19:03.981233  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:03.981239  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:03.981301  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:04.008306  313474 cri.go:89] found id: ""
	I1202 21:19:04.008322  313474 logs.go:282] 0 containers: []
	W1202 21:19:04.008329  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:04.008335  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:04.008401  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:04.033825  313474 cri.go:89] found id: ""
	I1202 21:19:04.033839  313474 logs.go:282] 0 containers: []
	W1202 21:19:04.033847  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:04.033853  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:04.033912  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:04.062862  313474 cri.go:89] found id: ""
	I1202 21:19:04.062876  313474 logs.go:282] 0 containers: []
	W1202 21:19:04.062883  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:04.062890  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:04.062957  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:04.098358  313474 cri.go:89] found id: ""
	I1202 21:19:04.098372  313474 logs.go:282] 0 containers: []
	W1202 21:19:04.098379  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:04.098388  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:04.098398  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:04.160856  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:04.160874  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:04.176607  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:04.176625  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:04.239202  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:04.231372   16639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:04.231808   16639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:04.233616   16639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:04.233967   16639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:04.235435   16639 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:04.239213  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:04.239224  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:04.304570  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:04.304588  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:06.834974  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:06.846425  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:06.846496  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:06.874499  313474 cri.go:89] found id: ""
	I1202 21:19:06.874513  313474 logs.go:282] 0 containers: []
	W1202 21:19:06.874520  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:06.874526  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:06.874585  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:06.899405  313474 cri.go:89] found id: ""
	I1202 21:19:06.899419  313474 logs.go:282] 0 containers: []
	W1202 21:19:06.899426  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:06.899432  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:06.899490  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:06.927927  313474 cri.go:89] found id: ""
	I1202 21:19:06.927940  313474 logs.go:282] 0 containers: []
	W1202 21:19:06.927947  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:06.927953  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:06.928017  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:06.956416  313474 cri.go:89] found id: ""
	I1202 21:19:06.956430  313474 logs.go:282] 0 containers: []
	W1202 21:19:06.956437  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:06.956443  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:06.956503  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:06.982016  313474 cri.go:89] found id: ""
	I1202 21:19:06.982030  313474 logs.go:282] 0 containers: []
	W1202 21:19:06.982038  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:06.982043  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:06.982102  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:07.008744  313474 cri.go:89] found id: ""
	I1202 21:19:07.008758  313474 logs.go:282] 0 containers: []
	W1202 21:19:07.008765  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:07.008771  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:07.008831  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:07.051903  313474 cri.go:89] found id: ""
	I1202 21:19:07.051917  313474 logs.go:282] 0 containers: []
	W1202 21:19:07.051924  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:07.051933  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:07.051956  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:07.111866  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:07.111885  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:07.131193  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:07.131212  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:07.197137  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:07.189103   16738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:07.189535   16738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:07.191127   16738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:07.191787   16738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:07.193245   16738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:07.197148  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:07.197159  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:07.258783  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:07.258802  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:09.784238  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:09.795790  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:09.795850  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:09.821880  313474 cri.go:89] found id: ""
	I1202 21:19:09.821894  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.821902  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:09.821907  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:09.821970  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:09.845564  313474 cri.go:89] found id: ""
	I1202 21:19:09.845579  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.845586  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:09.845617  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:09.845698  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:09.874848  313474 cri.go:89] found id: ""
	I1202 21:19:09.874862  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.874875  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:09.874880  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:09.874939  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:09.899396  313474 cri.go:89] found id: ""
	I1202 21:19:09.899410  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.899417  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:09.899423  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:09.899485  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:09.928207  313474 cri.go:89] found id: ""
	I1202 21:19:09.928231  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.928291  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:09.928297  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:09.928367  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:09.953363  313474 cri.go:89] found id: ""
	I1202 21:19:09.953386  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.953393  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:09.953400  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:09.953478  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:09.977852  313474 cri.go:89] found id: ""
	I1202 21:19:09.977866  313474 logs.go:282] 0 containers: []
	W1202 21:19:09.977873  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:09.977881  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:09.977891  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:10.035535  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:10.035554  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:10.053223  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:10.053240  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:10.129538  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:10.121217   16842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:10.122156   16842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:10.123949   16842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:10.124266   16842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:10.125909   16842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:10.129549  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:10.129561  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:10.196069  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:10.196089  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:12.729098  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:12.739162  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:12.739221  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:12.762279  313474 cri.go:89] found id: ""
	I1202 21:19:12.762293  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.762300  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:12.762305  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:12.762405  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:12.787279  313474 cri.go:89] found id: ""
	I1202 21:19:12.787293  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.787300  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:12.787306  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:12.787364  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:12.812545  313474 cri.go:89] found id: ""
	I1202 21:19:12.812558  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.812566  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:12.812571  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:12.812642  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:12.840741  313474 cri.go:89] found id: ""
	I1202 21:19:12.840755  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.840762  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:12.840767  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:12.840824  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:12.868898  313474 cri.go:89] found id: ""
	I1202 21:19:12.868912  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.868919  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:12.868924  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:12.868983  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:12.895296  313474 cri.go:89] found id: ""
	I1202 21:19:12.895310  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.895317  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:12.895322  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:12.895382  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:12.918838  313474 cri.go:89] found id: ""
	I1202 21:19:12.918852  313474 logs.go:282] 0 containers: []
	W1202 21:19:12.918859  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:12.918867  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:12.918880  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:12.989410  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:12.989434  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:13.018849  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:13.018864  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:13.075957  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:13.075976  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:13.095483  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:13.095501  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:13.160629  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:13.153520   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:13.154016   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:13.155471   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:13.155775   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:13.157071   16959 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:15.660888  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:15.670559  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:15.670624  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:15.693948  313474 cri.go:89] found id: ""
	I1202 21:19:15.693961  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.693969  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:15.693974  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:15.694041  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:15.720374  313474 cri.go:89] found id: ""
	I1202 21:19:15.720389  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.720396  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:15.720401  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:15.720460  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:15.745246  313474 cri.go:89] found id: ""
	I1202 21:19:15.745259  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.745267  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:15.745272  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:15.745339  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:15.772221  313474 cri.go:89] found id: ""
	I1202 21:19:15.772234  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.772241  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:15.772247  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:15.772317  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:15.795604  313474 cri.go:89] found id: ""
	I1202 21:19:15.795618  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.795624  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:15.795630  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:15.795687  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:15.824167  313474 cri.go:89] found id: ""
	I1202 21:19:15.824180  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.824187  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:15.824193  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:15.824252  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:15.847367  313474 cri.go:89] found id: ""
	I1202 21:19:15.847380  313474 logs.go:282] 0 containers: []
	W1202 21:19:15.847387  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:15.847396  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:15.847406  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:15.901801  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:15.901820  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:15.917208  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:15.917228  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:15.976565  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:15.969576   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:15.970085   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:15.971162   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:15.971576   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:15.973029   17043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:15.976575  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:15.976586  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:16.041174  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:16.041192  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
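Every "describe nodes" attempt fails the same way: dial tcp [::1]:8441: connect: connection refused is a plain TCP refusal on the apiserver port, consistent with the empty crictl listings above (no kube-apiserver container exists to answer). Two quick checks one could run inside the node to confirm, assuming ss and curl are available there:

    # Nothing should be listening on 8441 while the apiserver is down.
    sudo ss -ltn | grep 8441 || echo "nothing listening on 8441"
    # curl exits non-zero on a refused connection, so the fallback fires.
    curl -sk https://localhost:8441/healthz || echo "apiserver not reachable"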
	I1202 21:19:18.580269  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:18.590169  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:18.590245  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:18.615027  313474 cri.go:89] found id: ""
	I1202 21:19:18.615042  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.615049  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:18.615055  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:18.615135  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:18.640491  313474 cri.go:89] found id: ""
	I1202 21:19:18.640505  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.640512  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:18.640517  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:18.640584  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:18.665078  313474 cri.go:89] found id: ""
	I1202 21:19:18.665092  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.665099  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:18.665105  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:18.665162  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:18.689844  313474 cri.go:89] found id: ""
	I1202 21:19:18.689858  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.689865  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:18.689871  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:18.689928  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:18.715165  313474 cri.go:89] found id: ""
	I1202 21:19:18.715179  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.715186  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:18.715191  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:18.715250  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:18.740098  313474 cri.go:89] found id: ""
	I1202 21:19:18.740111  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.740118  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:18.740124  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:18.740181  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:18.764406  313474 cri.go:89] found id: ""
	I1202 21:19:18.764420  313474 logs.go:282] 0 containers: []
	W1202 21:19:18.764427  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:18.764435  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:18.764448  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:18.795780  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:18.795801  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:18.851180  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:18.851199  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:18.867072  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:18.867088  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:18.932904  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:18.925224   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:18.926353   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:18.927456   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:18.928040   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:18.929537   17159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:18.932917  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:18.932930  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
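Each probe round opens with sudo pgrep -xnf kube-apiserver.*minikube.*: -f matches the pattern against the full command line, -x requires the (extended-regex) pattern to match that command line exactly, and -n reports only the newest matching PID. An empty result, exit status 1, is what keeps this retry loop going:

    # Exit status 1 (no match) means no apiserver process is up yet.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver not running"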
	I1202 21:19:21.499766  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:21.511750  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:21.511824  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:21.541598  313474 cri.go:89] found id: ""
	I1202 21:19:21.541612  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.541619  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:21.541624  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:21.541710  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:21.565690  313474 cri.go:89] found id: ""
	I1202 21:19:21.565705  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.565712  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:21.565717  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:21.565786  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:21.588975  313474 cri.go:89] found id: ""
	I1202 21:19:21.588989  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.588996  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:21.589002  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:21.589060  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:21.616075  313474 cri.go:89] found id: ""
	I1202 21:19:21.616100  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.616108  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:21.616114  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:21.616189  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:21.640380  313474 cri.go:89] found id: ""
	I1202 21:19:21.640393  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.640410  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:21.640416  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:21.640473  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:21.664881  313474 cri.go:89] found id: ""
	I1202 21:19:21.664895  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.664912  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:21.664919  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:21.664976  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:21.688940  313474 cri.go:89] found id: ""
	I1202 21:19:21.688961  313474 logs.go:282] 0 containers: []
	W1202 21:19:21.688968  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:21.688976  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:21.688986  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:21.747031  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:21.747050  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:21.762969  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:21.762988  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:21.829106  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:21.821852   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:21.822216   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:21.823843   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:21.824190   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:21.825723   17252 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:21.829117  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:21.829142  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:21.890717  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:21.890735  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
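The "container status" line above is a small fallback chain: which crictl prints the binary's path if it is on PATH, echo crictl keeps the command non-empty otherwise, and if the crictl listing itself fails the || falls through to docker ps -a. The same chain, written out with modern command substitution:

    # Prefer crictl (by path if found, bare name otherwise), fall back to docker.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a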
	I1202 21:19:24.418721  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:24.428805  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:24.428867  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:24.454807  313474 cri.go:89] found id: ""
	I1202 21:19:24.454820  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.454827  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:24.454844  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:24.454905  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:24.479376  313474 cri.go:89] found id: ""
	I1202 21:19:24.479390  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.479396  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:24.479402  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:24.479459  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:24.504161  313474 cri.go:89] found id: ""
	I1202 21:19:24.504174  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.504181  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:24.504195  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:24.504257  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:24.529438  313474 cri.go:89] found id: ""
	I1202 21:19:24.529452  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.529460  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:24.529466  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:24.529540  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:24.554237  313474 cri.go:89] found id: ""
	I1202 21:19:24.554251  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.554258  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:24.554264  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:24.554322  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:24.583978  313474 cri.go:89] found id: ""
	I1202 21:19:24.583992  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.583999  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:24.584005  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:24.584071  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:24.608672  313474 cri.go:89] found id: ""
	I1202 21:19:24.608686  313474 logs.go:282] 0 containers: []
	W1202 21:19:24.608694  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:24.608702  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:24.608711  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:24.663382  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:24.663399  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:24.678935  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:24.678953  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:24.741560  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:24.733345   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.733924   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.735511   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.736192   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:24.737811   17355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:24.741571  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:24.741584  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:24.805991  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:24.806014  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:27.332486  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:27.343923  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:27.343980  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:27.370846  313474 cri.go:89] found id: ""
	I1202 21:19:27.370862  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.370869  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:27.370874  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:27.370933  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:27.394765  313474 cri.go:89] found id: ""
	I1202 21:19:27.394779  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.394786  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:27.394791  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:27.394858  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:27.418228  313474 cri.go:89] found id: ""
	I1202 21:19:27.418241  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.418248  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:27.418254  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:27.418312  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:27.442428  313474 cri.go:89] found id: ""
	I1202 21:19:27.442441  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.442448  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:27.442454  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:27.442516  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:27.467409  313474 cri.go:89] found id: ""
	I1202 21:19:27.467423  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.467430  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:27.467435  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:27.467492  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:27.490186  313474 cri.go:89] found id: ""
	I1202 21:19:27.490200  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.490207  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:27.490213  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:27.490270  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:27.515032  313474 cri.go:89] found id: ""
	I1202 21:19:27.515046  313474 logs.go:282] 0 containers: []
	W1202 21:19:27.515054  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:27.515062  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:27.515072  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:27.570118  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:27.570137  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:27.585958  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:27.585974  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:27.649259  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:27.641242   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.641812   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.643494   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.644027   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:27.645611   17460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:27.649269  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:27.649288  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:27.711120  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:27.711140  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:30.243770  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:30.255318  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:30.255385  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:30.279952  313474 cri.go:89] found id: ""
	I1202 21:19:30.279966  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.279974  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:30.279979  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:30.280039  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:30.320036  313474 cri.go:89] found id: ""
	I1202 21:19:30.320049  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.320056  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:30.320061  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:30.320119  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:30.351365  313474 cri.go:89] found id: ""
	I1202 21:19:30.351378  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.351385  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:30.351391  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:30.351449  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:30.378208  313474 cri.go:89] found id: ""
	I1202 21:19:30.378221  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.378228  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:30.378234  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:30.378293  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:30.404248  313474 cri.go:89] found id: ""
	I1202 21:19:30.404262  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.404268  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:30.404274  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:30.404331  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:30.428678  313474 cri.go:89] found id: ""
	I1202 21:19:30.428691  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.428698  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:30.428714  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:30.428786  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:30.452008  313474 cri.go:89] found id: ""
	I1202 21:19:30.452021  313474 logs.go:282] 0 containers: []
	W1202 21:19:30.452039  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:30.452047  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:30.452057  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:30.506509  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:30.506530  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:30.522444  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:30.522464  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:30.585091  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:30.576660   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.577294   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.579170   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.579871   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:30.581501   17565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:30.585102  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:30.585112  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:30.649461  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:30.649484  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:33.184340  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:33.195406  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:19:33.195468  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:19:33.220999  313474 cri.go:89] found id: ""
	I1202 21:19:33.221013  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.221020  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:19:33.221026  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:19:33.221087  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:19:33.245046  313474 cri.go:89] found id: ""
	I1202 21:19:33.245060  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.245068  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:19:33.245073  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:19:33.245134  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:19:33.268397  313474 cri.go:89] found id: ""
	I1202 21:19:33.268410  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.268417  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:19:33.268423  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:19:33.268485  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:19:33.304556  313474 cri.go:89] found id: ""
	I1202 21:19:33.304569  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.304577  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:19:33.304582  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:19:33.304643  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:19:33.335992  313474 cri.go:89] found id: ""
	I1202 21:19:33.336006  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.336013  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:19:33.336019  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:19:33.336086  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:19:33.367967  313474 cri.go:89] found id: ""
	I1202 21:19:33.367980  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.367989  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:19:33.367995  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:19:33.368052  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:19:33.393839  313474 cri.go:89] found id: ""
	I1202 21:19:33.393853  313474 logs.go:282] 0 containers: []
	W1202 21:19:33.393860  313474 logs.go:284] No container was found matching "kindnet"
	I1202 21:19:33.393867  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:19:33.393877  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:19:33.448875  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:19:33.448894  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 21:19:33.464807  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:19:33.464822  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:19:33.531228  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:19:33.523917   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.524445   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.525987   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.526306   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:19:33.527749   17673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1202 21:19:33.531238  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:19:33.531248  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:19:33.592933  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:19:33.592951  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 21:19:36.121943  313474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:19:36.132447  313474 kubeadm.go:602] duration metric: took 4m4.151661323s to restartPrimaryControlPlane
	W1202 21:19:36.132510  313474 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 21:19:36.132588  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
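Because the control-plane restart could not be completed, minikube falls back to a full `kubeadm reset` before re-initializing. Among other state (abridged; see `kubeadm reset --help`), reset deletes the static pod manifests and the kubeconfigs under /etc/kubernetes, which is why the config check a few lines below finds none of them:

    # What the reset above tears down, approximately:
    #   /etc/kubernetes/manifests/*        static control-plane pod manifests
    #   /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf
    #   local etcd data (when etcd ran as a static pod)
    sudo kubeadm reset --cri-socket /run/containerd/containerd.sock --force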
	I1202 21:19:36.539188  313474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:19:36.552660  313474 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 21:19:36.560203  313474 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 21:19:36.560257  313474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:19:36.567605  313474 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 21:19:36.567615  313474 kubeadm.go:158] found existing configuration files:
	
	I1202 21:19:36.567669  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:19:36.575238  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 21:19:36.575292  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 21:19:36.582200  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:19:36.589483  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 21:19:36.589539  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:19:36.596652  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:19:36.604117  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 21:19:36.604180  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:19:36.611312  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:19:36.619074  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 21:19:36.619140  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
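The `ls` exiting with status 2 ("cannot access ... No such file or directory") confirms the reset removed all four kubeconfigs, so there is no stale config to clean. The grep-then-rm sequence above is the same check applied per file: keep a kubeconfig only if it already points at the expected endpoint. Condensed, the four checks amount to (a sketch, using the same files and endpoint as the log):

    for f in admin kubelet controller-manager scheduler; do
        sudo grep "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f.conf" \
            || sudo rm -f "/etc/kubernetes/$f.conf"   # grep exits 2 when the file is missing
    done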
	I1202 21:19:36.626580  313474 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 21:19:36.665764  313474 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 21:19:36.665850  313474 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 21:19:36.739165  313474 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 21:19:36.739244  313474 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 21:19:36.739289  313474 kubeadm.go:319] OS: Linux
	I1202 21:19:36.739345  313474 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 21:19:36.739401  313474 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 21:19:36.739460  313474 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 21:19:36.739515  313474 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 21:19:36.739574  313474 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 21:19:36.739631  313474 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 21:19:36.739681  313474 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 21:19:36.739743  313474 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 21:19:36.739800  313474 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 21:19:36.802641  313474 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 21:19:36.802776  313474 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 21:19:36.802889  313474 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 21:19:36.810139  313474 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 21:19:36.815519  313474 out.go:252]   - Generating certificates and keys ...
	I1202 21:19:36.815612  313474 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 21:19:36.815684  313474 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 21:19:36.815766  313474 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 21:19:36.815832  313474 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 21:19:36.815906  313474 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 21:19:36.815965  313474 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 21:19:36.816035  313474 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 21:19:36.816096  313474 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 21:19:36.816180  313474 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 21:19:36.816258  313474 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 21:19:36.816301  313474 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 21:19:36.816363  313474 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 21:19:36.979466  313474 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 21:19:37.030688  313474 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 21:19:37.178864  313474 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 21:19:37.287458  313474 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 21:19:37.759486  313474 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 21:19:37.759977  313474 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 21:19:37.764136  313474 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 21:19:37.767507  313474 out.go:252]   - Booting up control plane ...
	I1202 21:19:37.767615  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 21:19:37.767697  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 21:19:37.768187  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 21:19:37.789119  313474 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 21:19:37.789389  313474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 21:19:37.796801  313474 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 21:19:37.797075  313474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 21:19:37.797116  313474 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 21:19:37.935526  313474 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 21:19:37.935655  313474 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 21:23:37.935181  313474 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000055683s
	I1202 21:23:37.935206  313474 kubeadm.go:319] 
	I1202 21:23:37.935262  313474 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 21:23:37.935294  313474 kubeadm.go:319] 	- The kubelet is not running
	I1202 21:23:37.935397  313474 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 21:23:37.935402  313474 kubeadm.go:319] 
	I1202 21:23:37.935505  313474 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 21:23:37.935535  313474 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 21:23:37.935565  313474 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 21:23:37.935567  313474 kubeadm.go:319] 
	I1202 21:23:37.939509  313474 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 21:23:37.940015  313474 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 21:23:37.940174  313474 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 21:23:37.940488  313474 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1202 21:23:37.940494  313474 kubeadm.go:319] 
	I1202 21:23:37.940592  313474 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
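The whole four-minute wait reduces to one probe: kubeadm polls the kubelet's local healthz endpoint until it answers, and here it never does. The same probe, plus the follow-ups kubeadm suggests, can be run by hand on the node (sketch):

    # kubeadm's kubelet-check, effectively:
    curl -sSL http://127.0.0.1:10248/healthz          # a healthy kubelet answers "ok"
    # When it refuses connections, ask systemd why:
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet --no-pager | tail -n 100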
	W1202 21:23:37.940735  313474 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000055683s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
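After this first failure minikube resets and retries `kubeadm init` once more, so the two four-minute kubelet waits account for most of this test's runtime. While a retry is in flight, the kubelet's crash loop can be watched live (hypothetical invocation for this profile):

    minikube ssh -p functional-753958 -- sudo journalctl -fu kubelet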
	
	I1202 21:23:37.940819  313474 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1202 21:23:38.352160  313474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:23:38.364903  313474 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 21:23:38.364957  313474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 21:23:38.373626  313474 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 21:23:38.373635  313474 kubeadm.go:158] found existing configuration files:
	
	I1202 21:23:38.373703  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1202 21:23:38.380912  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 21:23:38.380966  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 21:23:38.387986  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1202 21:23:38.395511  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 21:23:38.395567  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 21:23:38.403067  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1202 21:23:38.410435  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 21:23:38.410491  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 21:23:38.417648  313474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1202 21:23:38.425411  313474 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 21:23:38.425466  313474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 21:23:38.432690  313474 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 21:23:38.469901  313474 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 21:23:38.470170  313474 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 21:23:38.543545  313474 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 21:23:38.543611  313474 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 21:23:38.543646  313474 kubeadm.go:319] OS: Linux
	I1202 21:23:38.543689  313474 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 21:23:38.543736  313474 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 21:23:38.543782  313474 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 21:23:38.543829  313474 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 21:23:38.543876  313474 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 21:23:38.543922  313474 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 21:23:38.543966  313474 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 21:23:38.544013  313474 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 21:23:38.544058  313474 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 21:23:38.612266  313474 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 21:23:38.612377  313474 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 21:23:38.612479  313474 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 21:23:38.617939  313474 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 21:23:38.623176  313474 out.go:252]   - Generating certificates and keys ...
	I1202 21:23:38.623272  313474 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 21:23:38.623347  313474 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 21:23:38.623429  313474 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 21:23:38.623494  313474 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 21:23:38.623569  313474 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 21:23:38.623628  313474 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 21:23:38.623699  313474 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 21:23:38.623765  313474 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 21:23:38.623849  313474 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 21:23:38.623933  313474 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 21:23:38.623979  313474 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 21:23:38.624034  313474 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 21:23:39.195644  313474 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 21:23:40.418759  313474 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 21:23:40.662567  313474 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 21:23:41.331428  313474 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 21:23:41.582387  313474 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 21:23:41.582932  313474 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 21:23:41.585414  313474 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 21:23:41.588388  313474 out.go:252]   - Booting up control plane ...
	I1202 21:23:41.588487  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 21:23:41.588564  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 21:23:41.588629  313474 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 21:23:41.609723  313474 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 21:23:41.609836  313474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 21:23:41.617428  313474 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 21:23:41.617997  313474 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 21:23:41.618040  313474 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 21:23:41.754122  313474 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 21:23:41.754238  313474 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 21:27:41.753164  313474 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001114938s
	I1202 21:27:41.753189  313474 kubeadm.go:319] 
	I1202 21:27:41.753242  313474 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 21:27:41.753272  313474 kubeadm.go:319] 	- The kubelet is not running
	I1202 21:27:41.753369  313474 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 21:27:41.753373  313474 kubeadm.go:319] 
	I1202 21:27:41.753470  313474 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 21:27:41.753499  313474 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 21:27:41.753527  313474 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 21:27:41.753530  313474 kubeadm.go:319] 
	I1202 21:27:41.757163  313474 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 21:27:41.757586  313474 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 21:27:41.757709  313474 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 21:27:41.757943  313474 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 21:27:41.757948  313474 kubeadm.go:319] 
	I1202 21:27:41.758016  313474 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
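The second attempt fails the same way; the only difference is the final curl error ("context deadline exceeded" instead of "connection refused"), and both mean the kubelet never became healthy within 4m0s. Whether the service is even enabled and active can be checked directly (the preflight warning above already says it is not enabled):

    systemctl is-enabled kubelet
    systemctl is-active kubelet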
	I1202 21:27:41.758065  313474 kubeadm.go:403] duration metric: took 12m9.810714629s to StartCluster
	I1202 21:27:41.758097  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 21:27:41.758157  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 21:27:41.783479  313474 cri.go:89] found id: ""
	I1202 21:27:41.783492  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.783500  313474 logs.go:284] No container was found matching "kube-apiserver"
	I1202 21:27:41.783505  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 21:27:41.783577  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 21:27:41.814610  313474 cri.go:89] found id: ""
	I1202 21:27:41.814624  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.814631  313474 logs.go:284] No container was found matching "etcd"
	I1202 21:27:41.814644  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 21:27:41.814702  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 21:27:41.844545  313474 cri.go:89] found id: ""
	I1202 21:27:41.844559  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.844566  313474 logs.go:284] No container was found matching "coredns"
	I1202 21:27:41.844571  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 21:27:41.844630  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 21:27:41.876235  313474 cri.go:89] found id: ""
	I1202 21:27:41.876250  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.876257  313474 logs.go:284] No container was found matching "kube-scheduler"
	I1202 21:27:41.876262  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 21:27:41.876320  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 21:27:41.899944  313474 cri.go:89] found id: ""
	I1202 21:27:41.899957  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.899964  313474 logs.go:284] No container was found matching "kube-proxy"
	I1202 21:27:41.899969  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 21:27:41.900027  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 21:27:41.924640  313474 cri.go:89] found id: ""
	I1202 21:27:41.924653  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.924660  313474 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 21:27:41.924666  313474 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 21:27:41.924723  313474 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 21:27:41.951344  313474 cri.go:89] found id: ""
	I1202 21:27:41.951358  313474 logs.go:282] 0 containers: []
	W1202 21:27:41.951365  313474 logs.go:284] No container was found matching "kindnet"
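With the cluster down, minikube enumerates every expected control-plane container by name before collecting logs; an empty ID list for all seven confirms nothing was ever started. The seven queries above collapse to (sketch):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        sudo crictl ps -a --quiet --name="$name"    # empty output = no such container
    done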
	I1202 21:27:41.951373  313474 logs.go:123] Gathering logs for kubelet ...
	I1202 21:27:41.951383  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 21:27:42.009004  313474 logs.go:123] Gathering logs for dmesg ...
	I1202 21:27:42.009028  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
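The dmesg invocation is worth decoding, since its flags are easy to misread:

    # -H human-readable timestamps, -P no pager, -L=never no color;
    # --level restricts output to warnings and more severe entries before the tail:
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400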
	I1202 21:27:42.033968  313474 logs.go:123] Gathering logs for describe nodes ...
	I1202 21:27:42.033989  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 21:27:42.114849  313474 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:27:42.103925   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.104852   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.106932   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.108645   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.109525   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 21:27:42.103925   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.104852   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.106932   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.108645   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:27:42.109525   21468 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 21:27:42.114863  313474 logs.go:123] Gathering logs for containerd ...
	I1202 21:27:42.114875  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 21:27:42.193571  313474 logs.go:123] Gathering logs for container status ...
	I1202 21:27:42.193593  313474 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 21:27:42.259270  313474 out.go:285] * 
	W1202 21:27:42.259601  313474 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001114938s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 21:27:42.259616  313474 out.go:285] * 
	W1202 21:27:42.262291  313474 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 21:27:42.269405  313474 out.go:203] 
	W1202 21:27:42.272139  313474 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001114938s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 21:27:42.272287  313474 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 21:27:42.272371  313474 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
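Acting on that suggestion means re-running start with the extra kubelet config. A hypothetical retry for this profile, combining the hint with the driver, runtime, and version seen throughout this log, would look like:

    minikube start -p functional-753958 --driver=docker --container-runtime=containerd \
        --kubernetes-version=v1.35.0-beta.0 \
        --extra-config=kubelet.cgroup-driver=systemd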
	I1202 21:27:42.276351  313474 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609401010Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609411414Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609426076Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609435913Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609447105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609462194Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609496941Z" level=info msg="runtime interface created"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609503784Z" level=info msg="created NRI interface"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609513794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609548107Z" level=info msg="Connect containerd service"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.609923390Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.610459300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.628985739Z" level=info msg="Start subscribing containerd event"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.629232566Z" level=info msg="Start recovering state"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.630271509Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.630432538Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655348240Z" level=info msg="Start event monitor"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655522692Z" level=info msg="Start cni network conf syncer for default"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655586969Z" level=info msg="Start streaming server"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655657638Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655717968Z" level=info msg="runtime interface starting up..."
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655774631Z" level=info msg="starting plugins..."
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.655837464Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 21:15:30 functional-753958 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 02 21:15:30 functional-753958 containerd[10276]: time="2025-12-02T21:15:30.657496581Z" level=info msg="containerd successfully booted in 0.074787s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:29:44.264756   23019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:29:44.265486   23019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:29:44.267312   23019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:29:44.267997   23019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:29:44.269646   23019 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
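kubectl's "connection refused" here is a plain TCP-level refusal, not an auth or TLS problem, so it can be reproduced without kubectl. A minimal sketch (localhost:8441 is the --apiserver-port this suite passes to minikube start):

	// probe.go - reproduce the refused connection above with a bare TCP dial.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("refused, as in the kubectl errors above:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}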
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 21:29:44 up  3:12,  0 user,  load average: 0.10, 0.16, 0.46
	Linux functional-753958 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:29:41 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:29:41 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 02 21:29:41 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:29:41 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:29:41 functional-753958 kubelet[22904]: E1202 21:29:41.832974   22904 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:29:41 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:29:41 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:29:42 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 02 21:29:42 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:29:42 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:29:42 functional-753958 kubelet[22910]: E1202 21:29:42.603739   22910 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:29:42 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:29:42 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:29:43 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 02 21:29:43 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:29:43 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:29:43 functional-753958 kubelet[22923]: E1202 21:29:43.343307   22923 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:29:43 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:29:43 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:29:44 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 02 21:29:44 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:29:44 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:29:44 functional-753958 kubelet[22977]: E1202 21:29:44.108928   22977 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:29:44 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:29:44 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
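The restart loop above (480+ restarts) is the kubelet's startup validation rejecting a cgroup v1 host. A Linux-only Go sketch of the conventional detection, via the statfs magic number on /sys/fs/cgroup; this mirrors the check behind the error, not the kubelet's actual code:

	// cgroupver.go - report whether the host mounts the cgroup v2 unified hierarchy.
	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	func main() {
		var st unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
			fmt.Println("statfs failed:", err)
			return
		}
		if st.Type == unix.CGROUP2_SUPER_MAGIC {
			fmt.Println("cgroup v2: the kubelet validation above would pass")
		} else {
			fmt.Println("cgroup v1: matches the restart loop above")
		}
	}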
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958: exit status 2 (397.608399ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753958" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.35s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.65s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
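Each poll behind the WARNING lines that follow is a label-selector pod list against the apiserver. A minimal client-go sketch of that call, for illustration only (the kubeconfig path and label are taken from this log; the helper's real code is in helpers_test.go):

	// podpoll.go - list pods by label, the call the WARNING lines below show failing.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig location for a reader reproducing this by hand.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "integration-test=storage-provisioner"})
		if err != nil {
			fmt.Println("poll failed:", err) // "connection refused" in the lines below
			return
		}
		fmt.Println("matching pods:", len(pods.Items))
	}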
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1202 21:28:00.507782  263241 retry.go:31] will retry after 1.528600905s: Temporary Error: Get "http://10.111.73.211": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1202 21:28:12.037471  263241 retry.go:31] will retry after 4.242519351s: Temporary Error: Get "http://10.111.73.211": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1202 21:28:26.281818  263241 retry.go:31] will retry after 8.449705254s: Temporary Error: Get "http://10.111.73.211": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1202 21:28:44.732645  263241 retry.go:31] will retry after 5.775498108s: Temporary Error: Get "http://10.111.73.211": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1202 21:29:00.509299  263241 retry.go:31] will retry after 8.691458706s: Temporary Error: Get "http://10.111.73.211": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1202 21:29:19.202066  263241 retry.go:31] will retry after 13.32455598s: Temporary Error: Get "http://10.111.73.211": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1202 21:29:44.123459  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1202 21:30:39.584412  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
	[previous warning repeated verbatim 70 more times until the poll deadline]
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
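For anyone re-running this by hand: the helper's repeated pod-list call is equivalent to the kubectl query below (a sketch; the namespace and label selector are taken verbatim from the warnings above, and the kubeconfig context is assumed to match the minikube profile name).

	# Same query the test helper polls; with the apiserver refusing
	# connections on 192.168.49.2:8441 it fails identically.
	kubectl --context functional-753958 get pods \
	  -n kube-system \
	  -l integration-test=storage-provisioner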
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958: exit status 2 (308.189013ms)

-- stdout --
	Stopped
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-753958" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
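Before reading the post-mortem, one host-side check separates "apiserver down" from "network broken" (a sketch, assuming the 8441/tcp host port mapping shown in the docker inspect output below): a refused connection on the forwarded port means no process is listening at all.

	# 127.0.0.1:33111 is the host port Docker forwards to the
	# container's 8441/tcp (see the inspect output below); -k skips
	# TLS verification since the test CA is not in the host trust store.
	curl -sk https://127.0.0.1:33111/healthz || echo "apiserver not listening"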
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753958
helpers_test.go:243: (dbg) docker inspect functional-753958:

-- stdout --
	[
	    {
	        "Id": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	        "Created": "2025-12-02T21:00:39.470229988Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301734,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:00:39.535019201Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hostname",
	        "HostsPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hosts",
	        "LogPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a-json.log",
	        "Name": "/functional-753958",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753958:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-753958",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	                "LowerDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753958",
	                "Source": "/var/lib/docker/volumes/functional-753958/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753958",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753958",
	                "name.minikube.sigs.k8s.io": "functional-753958",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44df82336b1507d3d877e818baebb098332071ab7b3e3f7343e15c1fe55b3ab1",
	            "SandboxKey": "/var/run/docker/netns/44df82336b15",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753958": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:7f:7f:d7:c5:84",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e90d0c1216d32743827f22180e4e07c31360f0f3cc3431312aff46869716bb9",
	                    "EndpointID": "5ead8efafa1df1b03c8f1f51c032157081a17706bc48186adc0670bc42c0b521",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753958",
	                        "321ef4a88b51"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
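As a triage aside, the 8441/tcp -> 127.0.0.1:33111 mapping buried in the JSON above can be read out directly with docker inspect's Go-template formatter (a sketch using the container name from this run):

	# Print only the host port mapped to the apiserver's 8441/tcp.
	docker inspect \
	  -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' \
	  functional-753958
	# -> 33111 for the container inspected above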
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958: exit status 2 (310.2356ms)

-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-753958 image load --daemon kicbase/echo-server:functional-753958 --alsologtostderr                                                                   │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image          │ functional-753958 image ls                                                                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image          │ functional-753958 image save kicbase/echo-server:functional-753958 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image          │ functional-753958 image rm kicbase/echo-server:functional-753958 --alsologtostderr                                                                              │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image          │ functional-753958 image ls                                                                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image          │ functional-753958 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image          │ functional-753958 image ls                                                                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image          │ functional-753958 image save --daemon kicbase/echo-server:functional-753958 --alsologtostderr                                                                   │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ ssh            │ functional-753958 ssh sudo cat /etc/ssl/certs/263241.pem                                                                                                        │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ ssh            │ functional-753958 ssh sudo cat /usr/share/ca-certificates/263241.pem                                                                                            │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ ssh            │ functional-753958 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ ssh            │ functional-753958 ssh sudo cat /etc/ssl/certs/2632412.pem                                                                                                       │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ ssh            │ functional-753958 ssh sudo cat /usr/share/ca-certificates/2632412.pem                                                                                           │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ ssh            │ functional-753958 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ ssh            │ functional-753958 ssh sudo cat /etc/test/nested/copy/263241/hosts                                                                                               │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image          │ functional-753958 image ls --format short --alsologtostderr                                                                                                     │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image          │ functional-753958 image ls --format yaml --alsologtostderr                                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ ssh            │ functional-753958 ssh pgrep buildkitd                                                                                                                           │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │                     │
	│ image          │ functional-753958 image build -t localhost/my-image:functional-753958 testdata/build --alsologtostderr                                                          │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image          │ functional-753958 image ls                                                                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image          │ functional-753958 image ls --format json --alsologtostderr                                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image          │ functional-753958 image ls --format table --alsologtostderr                                                                                                     │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ update-context │ functional-753958 update-context --alsologtostderr -v=2                                                                                                         │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ update-context │ functional-753958 update-context --alsologtostderr -v=2                                                                                                         │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ update-context │ functional-753958 update-context --alsologtostderr -v=2                                                                                                         │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:29:59
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:29:59.074504  330745 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:29:59.074728  330745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:29:59.074756  330745 out.go:374] Setting ErrFile to fd 2...
	I1202 21:29:59.074776  330745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:29:59.075071  330745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:29:59.075484  330745 out.go:368] Setting JSON to false
	I1202 21:29:59.076394  330745 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":11537,"bootTime":1764699462,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 21:29:59.076493  330745 start.go:143] virtualization:  
	I1202 21:29:59.083017  330745 out.go:179] * [functional-753958] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:29:59.086254  330745 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:29:59.086368  330745 notify.go:221] Checking for updates...
	I1202 21:29:59.091916  330745 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:29:59.094811  330745 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:29:59.098258  330745 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 21:29:59.101319  330745 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:29:59.104119  330745 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:29:59.107443  330745 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:29:59.108020  330745 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:29:59.133893  330745 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:29:59.133996  330745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:29:59.195113  330745 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:29:59.186387815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:29:59.195217  330745 docker.go:319] overlay module found
	I1202 21:29:59.198460  330745 out.go:179] * Using the docker driver based on existing profile
	I1202 21:29:59.201271  330745 start.go:309] selected driver: docker
	I1202 21:29:59.201292  330745 start.go:927] validating driver "docker" against &{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:29:59.201401  330745 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:29:59.201516  330745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:29:59.254323  330745 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:29:59.245065143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:29:59.254742  330745 cni.go:84] Creating CNI manager for ""
	I1202 21:29:59.254813  330745 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 21:29:59.254857  330745 start.go:353] cluster config:
	{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:29:59.257859  330745 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 21:30:04 functional-753958 containerd[10276]: time="2025-12-02T21:30:04.866096159Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:30:04 functional-753958 containerd[10276]: time="2025-12-02T21:30:04.866927889Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-753958\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:30:05 functional-753958 containerd[10276]: time="2025-12-02T21:30:05.948662158Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-753958\""
	Dec 02 21:30:05 functional-753958 containerd[10276]: time="2025-12-02T21:30:05.951419149Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-753958\""
	Dec 02 21:30:05 functional-753958 containerd[10276]: time="2025-12-02T21:30:05.953764370Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 02 21:30:05 functional-753958 containerd[10276]: time="2025-12-02T21:30:05.962627697Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-753958\" returns successfully"
	Dec 02 21:30:06 functional-753958 containerd[10276]: time="2025-12-02T21:30:06.208495538Z" level=info msg="No images store for sha256:3bf94ca9241bb53bb3c5f46549ba7cf70917bdd4116f398ab0b3c6bb8c2ad3b7"
	Dec 02 21:30:06 functional-753958 containerd[10276]: time="2025-12-02T21:30:06.210698937Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-753958\""
	Dec 02 21:30:06 functional-753958 containerd[10276]: time="2025-12-02T21:30:06.217581480Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:30:06 functional-753958 containerd[10276]: time="2025-12-02T21:30:06.218271897Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-753958\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:30:07 functional-753958 containerd[10276]: time="2025-12-02T21:30:07.021583979Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-753958\""
	Dec 02 21:30:07 functional-753958 containerd[10276]: time="2025-12-02T21:30:07.023960100Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-753958\""
	Dec 02 21:30:07 functional-753958 containerd[10276]: time="2025-12-02T21:30:07.025942188Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 02 21:30:07 functional-753958 containerd[10276]: time="2025-12-02T21:30:07.036558777Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-753958\" returns successfully"
	Dec 02 21:30:07 functional-753958 containerd[10276]: time="2025-12-02T21:30:07.694088674Z" level=info msg="No images store for sha256:63dfda1d448b134ec8f94d6353c602ad85478de6d2c7b6aaf7ed8ac2e8efc7a0"
	Dec 02 21:30:07 functional-753958 containerd[10276]: time="2025-12-02T21:30:07.696268410Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-753958\""
	Dec 02 21:30:07 functional-753958 containerd[10276]: time="2025-12-02T21:30:07.703398544Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:30:07 functional-753958 containerd[10276]: time="2025-12-02T21:30:07.704547007Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-753958\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:30:15 functional-753958 containerd[10276]: time="2025-12-02T21:30:15.752018021Z" level=info msg="connecting to shim xo2bvnc49vg0970gtdfjqxh7c" address="unix:///run/containerd/s/295153c9dffc6e48c6a98e1c1804c4d21575d90f3badb48fc012d9d720314a4a" namespace=k8s.io protocol=ttrpc version=3
	Dec 02 21:30:15 functional-753958 containerd[10276]: time="2025-12-02T21:30:15.850721553Z" level=info msg="shim disconnected" id=xo2bvnc49vg0970gtdfjqxh7c namespace=k8s.io
	Dec 02 21:30:15 functional-753958 containerd[10276]: time="2025-12-02T21:30:15.851632781Z" level=info msg="cleaning up after shim disconnected" id=xo2bvnc49vg0970gtdfjqxh7c namespace=k8s.io
	Dec 02 21:30:15 functional-753958 containerd[10276]: time="2025-12-02T21:30:15.851751095Z" level=info msg="cleaning up dead shim" id=xo2bvnc49vg0970gtdfjqxh7c namespace=k8s.io
	Dec 02 21:30:16 functional-753958 containerd[10276]: time="2025-12-02T21:30:16.148652736Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-753958\""
	Dec 02 21:30:16 functional-753958 containerd[10276]: time="2025-12-02T21:30:16.156086672Z" level=info msg="ImageCreate event name:\"sha256:e669c45f0a6841049e19860eeee9ceafc8f2b35f32efb23b41ead66da5c03690\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:30:16 functional-753958 containerd[10276]: time="2025-12-02T21:30:16.156463268Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-753958\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:31:52.098558   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:31:52.099448   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:31:52.101291   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:31:52.101724   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:31:52.103335   25667 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 21:31:52 up  3:14,  0 user,  load average: 0.43, 0.39, 0.51
	Linux functional-753958 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:31:48 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:31:49 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 650.
	Dec 02 21:31:49 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:31:49 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:31:49 functional-753958 kubelet[25537]: E1202 21:31:49.334656   25537 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:31:49 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:31:49 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:31:50 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 651.
	Dec 02 21:31:50 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:31:50 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:31:50 functional-753958 kubelet[25543]: E1202 21:31:50.088241   25543 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:31:50 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:31:50 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:31:50 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 652.
	Dec 02 21:31:50 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:31:50 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:31:50 functional-753958 kubelet[25549]: E1202 21:31:50.835138   25549 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:31:50 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:31:50 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:31:51 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 653.
	Dec 02 21:31:51 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:31:51 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:31:51 functional-753958 kubelet[25585]: E1202 21:31:51.602269   25585 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:31:51 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:31:51 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958: exit status 2 (318.389631ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753958" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.65s)
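Note on the failure mode above: the kubelet excerpt shows a crash loop (restart counter at 653 and climbing) because the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host, which leaves the apiserver on port 8441 unreachable; that matches minikube status reporting the Host as Running while the APIServer is Stopped. A minimal way to confirm the host's cgroup mode, assuming a standard Linux host and a Docker Engine recent enough to report CgroupVersion (these commands are illustrative, not part of the test run):

    # cgroup2fs means cgroup v2; tmpfs means cgroup v1
    stat -fc %T /sys/fs/cgroup/
    # Docker reports the same information as "1" or "2"
    docker info --format '{{.CgroupVersion}}'

If the host is on cgroup v1, one common remedy is booting with systemd.unified_cgroup_hierarchy=1 on the kernel command line (or moving to a cgroup v2 host image) before rerunning the suite.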

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-753958 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-753958 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (64.456649ms)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-753958 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
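The repeated template failure above is mechanical rather than a bug in the template itself: with the apiserver refusing connections, kubectl get nodes returns an empty List, and index .items 0 errors on an empty slice. For comparison, a range-based variant of the same query degrades to empty output instead of erroring (a hypothetical rewrite, not what the test runs):

    kubectl --context functional-753958 get nodes --output=go-template \
      --template='{{range .items}}{{range $k, $v := .metadata.labels}}{{$k}} {{end}}{{end}}'

Either way the underlying failure is the unreachable apiserver at 192.168.49.2:8441, so none of the minikube.k8s.io/* labels could be verified.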
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753958
helpers_test.go:243: (dbg) docker inspect functional-753958:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	        "Created": "2025-12-02T21:00:39.470229988Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301734,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T21:00:39.535019201Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hostname",
	        "HostsPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hosts",
	        "LogPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a-json.log",
	        "Name": "/functional-753958",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753958:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-753958",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
	                "LowerDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753958",
	                "Source": "/var/lib/docker/volumes/functional-753958/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753958",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753958",
	                "name.minikube.sigs.k8s.io": "functional-753958",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44df82336b1507d3d877e818baebb098332071ab7b3e3f7343e15c1fe55b3ab1",
	            "SandboxKey": "/var/run/docker/netns/44df82336b15",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753958": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:7f:7f:d7:c5:84",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e90d0c1216d32743827f22180e4e07c31360f0f3cc3431312aff46869716bb9",
	                    "EndpointID": "5ead8efafa1df1b03c8f1f51c032157081a17706bc48186adc0670bc42c0b521",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753958",
	                        "321ef4a88b51"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
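The inspect dump confirms the container itself is healthy at the Docker layer (State "running", endpoint 192.168.49.2 on the functional-753958 network), so the fault is inside the guest rather than in Docker networking. When only those fields matter, a format string keeps the same check short; a sketch, assuming as above that the network key matches the profile name:

    docker inspect functional-753958 --format \
      '{{.State.Status}} {{with index .NetworkSettings.Networks "functional-753958"}}{{.IPAddress}}/{{.IPPrefixLen}} via {{.Gateway}}{{end}}'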
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958: exit status 2 (303.081458ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1058117317/001:/mount2 --alsologtostderr -v=1                            │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ ssh       │ functional-753958 ssh findmnt -T /mount1                                                                                                                        │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ mount     │ -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1058117317/001:/mount3 --alsologtostderr -v=1                            │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ ssh       │ functional-753958 ssh findmnt -T /mount2                                                                                                                        │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ ssh       │ functional-753958 ssh findmnt -T /mount3                                                                                                                        │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │ 02 Dec 25 21:29 UTC │
	│ mount     │ -p functional-753958 --kill=true                                                                                                                                │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ start     │ -p functional-753958 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0             │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ start     │ -p functional-753958 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0             │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ start     │ -p functional-753958 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                       │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-753958 --alsologtostderr -v=1                                                                                                  │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:29 UTC │                     │
	│ license   │                                                                                                                                                                 │ minikube          │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ ssh       │ functional-753958 ssh sudo systemctl is-active docker                                                                                                           │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │                     │
	│ ssh       │ functional-753958 ssh sudo systemctl is-active crio                                                                                                             │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │                     │
	│ image     │ functional-753958 image load --daemon kicbase/echo-server:functional-753958 --alsologtostderr                                                                   │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image     │ functional-753958 image ls                                                                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image     │ functional-753958 image load --daemon kicbase/echo-server:functional-753958 --alsologtostderr                                                                   │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image     │ functional-753958 image ls                                                                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image     │ functional-753958 image load --daemon kicbase/echo-server:functional-753958 --alsologtostderr                                                                   │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image     │ functional-753958 image ls                                                                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image     │ functional-753958 image save kicbase/echo-server:functional-753958 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image     │ functional-753958 image rm kicbase/echo-server:functional-753958 --alsologtostderr                                                                              │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image     │ functional-753958 image ls                                                                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image     │ functional-753958 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image     │ functional-753958 image ls                                                                                                                                      │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	│ image     │ functional-753958 image save --daemon kicbase/echo-server:functional-753958 --alsologtostderr                                                                   │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:30 UTC │ 02 Dec 25 21:30 UTC │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 21:29:59
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 21:29:59.074504  330745 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:29:59.074728  330745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:29:59.074756  330745 out.go:374] Setting ErrFile to fd 2...
	I1202 21:29:59.074776  330745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:29:59.075071  330745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:29:59.075484  330745 out.go:368] Setting JSON to false
	I1202 21:29:59.076394  330745 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":11537,"bootTime":1764699462,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 21:29:59.076493  330745 start.go:143] virtualization:  
	I1202 21:29:59.083017  330745 out.go:179] * [functional-753958] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:29:59.086254  330745 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:29:59.086368  330745 notify.go:221] Checking for updates...
	I1202 21:29:59.091916  330745 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:29:59.094811  330745 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:29:59.098258  330745 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 21:29:59.101319  330745 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:29:59.104119  330745 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:29:59.107443  330745 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:29:59.108020  330745 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:29:59.133893  330745 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:29:59.133996  330745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:29:59.195113  330745 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:29:59.186387815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:29:59.195217  330745 docker.go:319] overlay module found
	I1202 21:29:59.198460  330745 out.go:179] * Using the docker driver based on existing profile
	I1202 21:29:59.201271  330745 start.go:309] selected driver: docker
	I1202 21:29:59.201292  330745 start.go:927] validating driver "docker" against &{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:29:59.201401  330745 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:29:59.201516  330745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:29:59.254323  330745 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:29:59.245065143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:29:59.254742  330745 cni.go:84] Creating CNI manager for ""
	I1202 21:29:59.254813  330745 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 21:29:59.254857  330745 start.go:353] cluster config:
	{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:29:59.257859  330745 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 21:30:03 functional-753958 containerd[10276]: time="2025-12-02T21:30:03.791478232Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-753958\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:30:04 functional-753958 containerd[10276]: time="2025-12-02T21:30:04.612757740Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-753958\""
	Dec 02 21:30:04 functional-753958 containerd[10276]: time="2025-12-02T21:30:04.615418069Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-753958\""
	Dec 02 21:30:04 functional-753958 containerd[10276]: time="2025-12-02T21:30:04.617703091Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 02 21:30:04 functional-753958 containerd[10276]: time="2025-12-02T21:30:04.626621669Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-753958\" returns successfully"
	Dec 02 21:30:04 functional-753958 containerd[10276]: time="2025-12-02T21:30:04.856032457Z" level=info msg="No images store for sha256:3bf94ca9241bb53bb3c5f46549ba7cf70917bdd4116f398ab0b3c6bb8c2ad3b7"
	Dec 02 21:30:04 functional-753958 containerd[10276]: time="2025-12-02T21:30:04.858725639Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-753958\""
	Dec 02 21:30:04 functional-753958 containerd[10276]: time="2025-12-02T21:30:04.866096159Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:30:04 functional-753958 containerd[10276]: time="2025-12-02T21:30:04.866927889Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-753958\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:30:05 functional-753958 containerd[10276]: time="2025-12-02T21:30:05.948662158Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-753958\""
	Dec 02 21:30:05 functional-753958 containerd[10276]: time="2025-12-02T21:30:05.951419149Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-753958\""
	Dec 02 21:30:05 functional-753958 containerd[10276]: time="2025-12-02T21:30:05.953764370Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 02 21:30:05 functional-753958 containerd[10276]: time="2025-12-02T21:30:05.962627697Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-753958\" returns successfully"
	Dec 02 21:30:06 functional-753958 containerd[10276]: time="2025-12-02T21:30:06.208495538Z" level=info msg="No images store for sha256:3bf94ca9241bb53bb3c5f46549ba7cf70917bdd4116f398ab0b3c6bb8c2ad3b7"
	Dec 02 21:30:06 functional-753958 containerd[10276]: time="2025-12-02T21:30:06.210698937Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-753958\""
	Dec 02 21:30:06 functional-753958 containerd[10276]: time="2025-12-02T21:30:06.217581480Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:30:06 functional-753958 containerd[10276]: time="2025-12-02T21:30:06.218271897Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-753958\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:30:07 functional-753958 containerd[10276]: time="2025-12-02T21:30:07.021583979Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-753958\""
	Dec 02 21:30:07 functional-753958 containerd[10276]: time="2025-12-02T21:30:07.023960100Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-753958\""
	Dec 02 21:30:07 functional-753958 containerd[10276]: time="2025-12-02T21:30:07.025942188Z" level=info msg="ImageDelete event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\""
	Dec 02 21:30:07 functional-753958 containerd[10276]: time="2025-12-02T21:30:07.036558777Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-753958\" returns successfully"
	Dec 02 21:30:07 functional-753958 containerd[10276]: time="2025-12-02T21:30:07.694088674Z" level=info msg="No images store for sha256:63dfda1d448b134ec8f94d6353c602ad85478de6d2c7b6aaf7ed8ac2e8efc7a0"
	Dec 02 21:30:07 functional-753958 containerd[10276]: time="2025-12-02T21:30:07.696268410Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-753958\""
	Dec 02 21:30:07 functional-753958 containerd[10276]: time="2025-12-02T21:30:07.703398544Z" level=info msg="ImageCreate event name:\"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 21:30:07 functional-753958 containerd[10276]: time="2025-12-02T21:30:07.704547007Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-753958\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 21:30:09.343434   24415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:30:09.344189   24415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:30:09.345920   24415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:30:09.346632   24415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1202 21:30:09.348335   24415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 21:30:09 up  3:12,  0 user,  load average: 1.07, 0.39, 0.53
	Linux functional-753958 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 21:30:05 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:30:06 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 513.
	Dec 02 21:30:06 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:30:06 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:30:06 functional-753958 kubelet[24162]: E1202 21:30:06.601183   24162 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:30:06 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:30:06 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:30:07 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 514.
	Dec 02 21:30:07 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:30:07 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:30:07 functional-753958 kubelet[24232]: E1202 21:30:07.347947   24232 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:30:07 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:30:07 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:30:08 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 515.
	Dec 02 21:30:08 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:30:08 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:30:08 functional-753958 kubelet[24280]: E1202 21:30:08.115871   24280 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:30:08 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:30:08 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 21:30:08 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 516.
	Dec 02 21:30:08 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:30:08 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 21:30:08 functional-753958 kubelet[24334]: E1202 21:30:08.858721   24334 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 21:30:08 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 21:30:08 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
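
The kubelet journal above pins down the root cause for this whole block of failures: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release"), so the apiserver never comes up and every test below sees connection refused on port 8441. A quick way to confirm which cgroup hierarchy the runner exposes (a diagnostic sketch, assuming systemd's default /sys/fs/cgroup mount; not part of the test run itself):

	# "cgroup2fs" means cgroup v2; "tmpfs" means the legacy v1 hierarchy
	stat -fc %T /sys/fs/cgroup/
	# what the docker driver itself reports ("1" or "2")
	docker info --format '{{.CgroupVersion}}'

On an Ubuntu 20.04 host like ip-172-31-21-244, booting with systemd.unified_cgroup_hierarchy=1 on the kernel command line is the usual way to switch to cgroup v2; whether this CI image permits that change is not answered by this log.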
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958: exit status 2 (341.432444ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753958" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.45s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-753958 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-753958 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1202 21:27:49.924163  326479 out.go:360] Setting OutFile to fd 1 ...
I1202 21:27:49.924443  326479 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:27:49.924472  326479 out.go:374] Setting ErrFile to fd 2...
I1202 21:27:49.924492  326479 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:27:49.924822  326479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
I1202 21:27:49.925155  326479 mustload.go:66] Loading cluster: functional-753958
I1202 21:27:49.925734  326479 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 21:27:49.926328  326479 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
I1202 21:27:49.945565  326479 host.go:66] Checking if "functional-753958" exists ...
I1202 21:27:49.945944  326479 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1202 21:27:50.108034  326479 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:27:50.084309869 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1202 21:27:50.108183  326479 api_server.go:166] Checking apiserver status ...
I1202 21:27:50.108248  326479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1202 21:27:50.108288  326479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
I1202 21:27:50.155850  326479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
W1202 21:27:50.283824  326479 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1202 21:27:50.287077  326479 out.go:179] * The control-plane node functional-753958 apiserver is not running: (state=Stopped)
I1202 21:27:50.289968  326479 out.go:179]   To start a cluster, run: "minikube start -p functional-753958"

                                                
                                                
stdout: * The control-plane node functional-753958 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-753958"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-753958 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-753958 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-753958 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-753958 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 326478: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-753958 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-753958 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)
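
Exit code 103 here is not tunnel-specific: the stderr shows the tunnel's precheck (api_server.go, "Checking apiserver status ...") failing to find a kube-apiserver process, so the command exits before installing any routes. The precheck can be reproduced by hand with the same commands this log records (a sketch; binary path and profile name taken from this run):

	out/minikube-linux-arm64 -p functional-753958 status
	# the exact process check the tunnel runs inside the node
	out/minikube-linux-arm64 -p functional-753958 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'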

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-753958 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-753958 apply -f testdata/testsvc.yaml: exit status 1 (93.400874ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-753958 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.09s)
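
The validation message is a red herring: --validate=false would not rescue this apply, because the openapi download fails for the same reason as everything else in this group, the apiserver endpoint refusing connections. A direct probe separates "bad manifest" from "dead server" (a sketch):

	# prints "ok" against a healthy apiserver; here it fails with connection refused
	kubectl --context functional-753958 get --raw /readyz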

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (112.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.111.73.211": Temporary Error: Get "http://10.111.73.211": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-753958 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-753958 get svc nginx-svc: exit status 1 (84.551675ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-753958 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (112.10s)
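
AccessDirect polls the nginx service's ClusterIP (10.111.73.211) over the routes that minikube tunnel installs, so with no tunnel and no apiserver it times out before the kubectl fallback can report anything. On a healthy cluster the check reduces to roughly this (a sketch; nginx-svc is created by the WaitService setup manifest that failed above):

	IP=$(kubectl --context functional-753958 get svc nginx-svc -o jsonpath='{.spec.clusterIP}')
	curl --max-time 10 "http://$IP/"   # expected to return the "Welcome to nginx!" page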

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-753958 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-753958 create deployment hello-node --image kicbase/echo-server: exit status 1 (100.963027ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-753958 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.10s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-753958 service list: exit status 103 (273.478358ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-753958 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-753958"

                                                
                                                
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-753958 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-753958 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-753958\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.27s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-753958 service list -o json: exit status 103 (258.104983ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-753958 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-753958"

                                                
                                                
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-753958 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-753958 service --namespace=default --https --url hello-node: exit status 103 (258.034634ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-753958 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-753958"

                                                
                                                
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-753958 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-753958 service hello-node --url --format={{.IP}}: exit status 103 (275.078419ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-753958 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-753958"

                                                
                                                
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-753958 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-753958 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-753958\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.28s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-753958 service hello-node --url: exit status 103 (255.352447ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-753958 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-753958"

                                                
                                                
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-753958 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-753958 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-753958"
functional_test.go:1579: failed to parse "* The control-plane node functional-753958 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-753958\"": parse "* The control-plane node functional-753958 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-753958\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)
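
The parse failure at functional_test.go:1579 comes from the test feeding minikube's multi-line advisory text straight into net/url: on a healthy cluster the --url subcommand prints a bare URL, which is the only shape the parser accepts (a sketch of the normal output; the NodePort value is made up for illustration):

	$ out/minikube-linux-arm64 -p functional-753958 service hello-node --url
	http://192.168.49.2:31234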

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1146874681/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764710990148751834" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1146874681/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764710990148751834" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1146874681/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764710990148751834" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1146874681/001/test-1764710990148751834
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-753958 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (323.654417ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 21:29:50.472732  263241 retry.go:31] will retry after 527.346779ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  2 21:29 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  2 21:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  2 21:29 test-1764710990148751834
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh cat /mount-9p/test-1764710990148751834
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-753958 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-753958 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (58.277765ms)

                                                
                                                
** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-753958 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-753958 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (268.840495ms)

                                                
                                                
-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=34231)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec  2 21:29 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec  2 21:29 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec  2 21:29 test-1764710990148751834
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-753958 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1146874681/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1146874681/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1146874681/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:34231
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1146874681/001 to /mount-9p

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1146874681/001:/mount-9p --alsologtostderr -v=1] stderr:
I1202 21:29:50.202963  328864 out.go:360] Setting OutFile to fd 1 ...
I1202 21:29:50.203173  328864 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:29:50.203180  328864 out.go:374] Setting ErrFile to fd 2...
I1202 21:29:50.203185  328864 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:29:50.203450  328864 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
I1202 21:29:50.203729  328864 mustload.go:66] Loading cluster: functional-753958
I1202 21:29:50.204092  328864 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 21:29:50.204608  328864 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
I1202 21:29:50.227499  328864 host.go:66] Checking if "functional-753958" exists ...
I1202 21:29:50.227819  328864 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1202 21:29:50.315998  328864 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:29:50.303690096 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1202 21:29:50.316346  328864 cli_runner.go:164] Run: docker network inspect functional-753958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1202 21:29:50.342869  328864 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1146874681/001 into VM as /mount-9p ...
I1202 21:29:50.345837  328864 out.go:179]   - Mount type:   9p
I1202 21:29:50.348688  328864 out.go:179]   - User ID:      docker
I1202 21:29:50.351628  328864 out.go:179]   - Group ID:     docker
I1202 21:29:50.354452  328864 out.go:179]   - Version:      9p2000.L
I1202 21:29:50.357570  328864 out.go:179]   - Message Size: 262144
I1202 21:29:50.360317  328864 out.go:179]   - Options:      map[]
I1202 21:29:50.368874  328864 out.go:179]   - Bind Address: 192.168.49.1:34231
I1202 21:29:50.371750  328864 out.go:179] * Userspace file server: 
I1202 21:29:50.372068  328864 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1202 21:29:50.372135  328864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
I1202 21:29:50.390922  328864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
I1202 21:29:50.496527  328864 mount.go:180] unmount for /mount-9p ran successfully
I1202 21:29:50.496555  328864 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1202 21:29:50.504709  328864 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=34231,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1202 21:29:50.515203  328864 main.go:127] stdlog: ufs.go:141 connected
I1202 21:29:50.515381  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tversion tag 65535 msize 262144 version '9P2000.L'
I1202 21:29:50.515426  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rversion tag 65535 msize 262144 version '9P2000'
I1202 21:29:50.515638  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1202 21:29:50.515694  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rattach tag 0 aqid (3b5de0 e0f8c13f 'd')
I1202 21:29:50.516339  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tstat tag 0 fid 0
I1202 21:29:50.516392  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b5de0 e0f8c13f 'd') m d775 at 0 mt 1764710990 l 4096 t 0 d 0 ext )
I1202 21:29:50.520237  328864 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/.mount-process: {Name:mkcae2868952617921434231cfdcdcee8e3684bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 21:29:50.520454  328864 mount.go:105] mount successful: ""
I1202 21:29:50.523935  328864 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1146874681/001 to /mount-9p
I1202 21:29:50.526965  328864 out.go:203] 
I1202 21:29:50.529761  328864 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1202 21:29:51.530206  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tstat tag 0 fid 0
I1202 21:29:51.530282  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b5de0 e0f8c13f 'd') m d775 at 0 mt 1764710990 l 4096 t 0 d 0 ext )
I1202 21:29:51.530617  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Twalk tag 0 fid 0 newfid 1 
I1202 21:29:51.530658  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rwalk tag 0 
I1202 21:29:51.530793  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Topen tag 0 fid 1 mode 0
I1202 21:29:51.530839  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Ropen tag 0 qid (3b5de0 e0f8c13f 'd') iounit 0
I1202 21:29:51.530964  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tstat tag 0 fid 0
I1202 21:29:51.530998  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b5de0 e0f8c13f 'd') m d775 at 0 mt 1764710990 l 4096 t 0 d 0 ext )
I1202 21:29:51.531156  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tread tag 0 fid 1 offset 0 count 262120
I1202 21:29:51.531277  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rread tag 0 count 258
I1202 21:29:51.531415  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tread tag 0 fid 1 offset 258 count 261862
I1202 21:29:51.531448  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rread tag 0 count 0
I1202 21:29:51.531564  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tread tag 0 fid 1 offset 258 count 262120
I1202 21:29:51.531589  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rread tag 0 count 0
I1202 21:29:51.531722  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1202 21:29:51.531759  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rwalk tag 0 (3b5de1 e0f8c13f '') 
I1202 21:29:51.531866  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tstat tag 0 fid 2
I1202 21:29:51.531907  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (3b5de1 e0f8c13f '') m 644 at 0 mt 1764710990 l 24 t 0 d 0 ext )
I1202 21:29:51.532042  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tstat tag 0 fid 2
I1202 21:29:51.532073  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (3b5de1 e0f8c13f '') m 644 at 0 mt 1764710990 l 24 t 0 d 0 ext )
I1202 21:29:51.532206  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tclunk tag 0 fid 2
I1202 21:29:51.532236  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rclunk tag 0
I1202 21:29:51.532363  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Twalk tag 0 fid 0 newfid 2 0:'test-1764710990148751834' 
I1202 21:29:51.532397  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rwalk tag 0 (3b5de3 e0f8c13f '') 
I1202 21:29:51.532523  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tstat tag 0 fid 2
I1202 21:29:51.532558  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rstat tag 0 st ('test-1764710990148751834' 'jenkins' 'jenkins' '' q (3b5de3 e0f8c13f '') m 644 at 0 mt 1764710990 l 24 t 0 d 0 ext )
I1202 21:29:51.532671  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tstat tag 0 fid 2
I1202 21:29:51.532702  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rstat tag 0 st ('test-1764710990148751834' 'jenkins' 'jenkins' '' q (3b5de3 e0f8c13f '') m 644 at 0 mt 1764710990 l 24 t 0 d 0 ext )
I1202 21:29:51.532821  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tclunk tag 0 fid 2
I1202 21:29:51.532846  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rclunk tag 0
I1202 21:29:51.533001  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1202 21:29:51.533048  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rwalk tag 0 (3b5de2 e0f8c13f '') 
I1202 21:29:51.533170  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tstat tag 0 fid 2
I1202 21:29:51.533202  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (3b5de2 e0f8c13f '') m 644 at 0 mt 1764710990 l 24 t 0 d 0 ext )
I1202 21:29:51.533329  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tstat tag 0 fid 2
I1202 21:29:51.533359  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (3b5de2 e0f8c13f '') m 644 at 0 mt 1764710990 l 24 t 0 d 0 ext )
I1202 21:29:51.533482  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tclunk tag 0 fid 2
I1202 21:29:51.533507  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rclunk tag 0
I1202 21:29:51.533630  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tread tag 0 fid 1 offset 258 count 262120
I1202 21:29:51.533680  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rread tag 0 count 0
I1202 21:29:51.533830  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tclunk tag 0 fid 1
I1202 21:29:51.533859  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rclunk tag 0
I1202 21:29:51.817845  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Twalk tag 0 fid 0 newfid 1 0:'test-1764710990148751834' 
I1202 21:29:51.817925  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rwalk tag 0 (3b5de3 e0f8c13f '') 
I1202 21:29:51.818085  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tstat tag 0 fid 1
I1202 21:29:51.818134  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rstat tag 0 st ('test-1764710990148751834' 'jenkins' 'jenkins' '' q (3b5de3 e0f8c13f '') m 644 at 0 mt 1764710990 l 24 t 0 d 0 ext )
I1202 21:29:51.818287  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Twalk tag 0 fid 1 newfid 2 
I1202 21:29:51.818317  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rwalk tag 0 
I1202 21:29:51.818428  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Topen tag 0 fid 2 mode 0
I1202 21:29:51.818477  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Ropen tag 0 qid (3b5de3 e0f8c13f '') iounit 0
I1202 21:29:51.818611  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tstat tag 0 fid 1
I1202 21:29:51.818647  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rstat tag 0 st ('test-1764710990148751834' 'jenkins' 'jenkins' '' q (3b5de3 e0f8c13f '') m 644 at 0 mt 1764710990 l 24 t 0 d 0 ext )
I1202 21:29:51.818792  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tread tag 0 fid 2 offset 0 count 262120
I1202 21:29:51.818835  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rread tag 0 count 24
I1202 21:29:51.818959  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tread tag 0 fid 2 offset 24 count 262120
I1202 21:29:51.819016  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rread tag 0 count 0
I1202 21:29:51.819160  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tread tag 0 fid 2 offset 24 count 262120
I1202 21:29:51.819196  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rread tag 0 count 0
I1202 21:29:51.819341  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tclunk tag 0 fid 2
I1202 21:29:51.819400  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rclunk tag 0
I1202 21:29:51.819596  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tclunk tag 0 fid 1
I1202 21:29:51.819624  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rclunk tag 0
I1202 21:29:52.148480  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tstat tag 0 fid 0
I1202 21:29:52.148567  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b5de0 e0f8c13f 'd') m d775 at 0 mt 1764710990 l 4096 t 0 d 0 ext )
I1202 21:29:52.148927  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Twalk tag 0 fid 0 newfid 1 
I1202 21:29:52.148971  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rwalk tag 0 
I1202 21:29:52.149091  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Topen tag 0 fid 1 mode 0
I1202 21:29:52.149142  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Ropen tag 0 qid (3b5de0 e0f8c13f 'd') iounit 0
I1202 21:29:52.149270  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tstat tag 0 fid 0
I1202 21:29:52.149302  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (3b5de0 e0f8c13f 'd') m d775 at 0 mt 1764710990 l 4096 t 0 d 0 ext )
I1202 21:29:52.149440  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tread tag 0 fid 1 offset 0 count 262120
I1202 21:29:52.149561  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rread tag 0 count 258
I1202 21:29:52.149716  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tread tag 0 fid 1 offset 258 count 261862
I1202 21:29:52.149750  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rread tag 0 count 0
I1202 21:29:52.149881  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tread tag 0 fid 1 offset 258 count 262120
I1202 21:29:52.149910  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rread tag 0 count 0
I1202 21:29:52.150038  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1202 21:29:52.150075  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rwalk tag 0 (3b5de1 e0f8c13f '') 
I1202 21:29:52.150197  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tstat tag 0 fid 2
I1202 21:29:52.150234  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (3b5de1 e0f8c13f '') m 644 at 0 mt 1764710990 l 24 t 0 d 0 ext )
I1202 21:29:52.150360  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tstat tag 0 fid 2
I1202 21:29:52.150393  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (3b5de1 e0f8c13f '') m 644 at 0 mt 1764710990 l 24 t 0 d 0 ext )
I1202 21:29:52.150526  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tclunk tag 0 fid 2
I1202 21:29:52.150552  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rclunk tag 0
I1202 21:29:52.150681  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Twalk tag 0 fid 0 newfid 2 0:'test-1764710990148751834' 
I1202 21:29:52.150716  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rwalk tag 0 (3b5de3 e0f8c13f '') 
I1202 21:29:52.150847  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tstat tag 0 fid 2
I1202 21:29:52.150881  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rstat tag 0 st ('test-1764710990148751834' 'jenkins' 'jenkins' '' q (3b5de3 e0f8c13f '') m 644 at 0 mt 1764710990 l 24 t 0 d 0 ext )
I1202 21:29:52.150999  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tstat tag 0 fid 2
I1202 21:29:52.151040  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rstat tag 0 st ('test-1764710990148751834' 'jenkins' 'jenkins' '' q (3b5de3 e0f8c13f '') m 644 at 0 mt 1764710990 l 24 t 0 d 0 ext )
I1202 21:29:52.151166  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tclunk tag 0 fid 2
I1202 21:29:52.151189  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rclunk tag 0
I1202 21:29:52.151316  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1202 21:29:52.151347  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rwalk tag 0 (3b5de2 e0f8c13f '') 
I1202 21:29:52.151466  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tstat tag 0 fid 2
I1202 21:29:52.151497  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (3b5de2 e0f8c13f '') m 644 at 0 mt 1764710990 l 24 t 0 d 0 ext )
I1202 21:29:52.151616  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tstat tag 0 fid 2
I1202 21:29:52.151648  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (3b5de2 e0f8c13f '') m 644 at 0 mt 1764710990 l 24 t 0 d 0 ext )
I1202 21:29:52.151784  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tclunk tag 0 fid 2
I1202 21:29:52.151808  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rclunk tag 0
I1202 21:29:52.151915  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tread tag 0 fid 1 offset 258 count 262120
I1202 21:29:52.151943  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rread tag 0 count 0
I1202 21:29:52.152109  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tclunk tag 0 fid 1
I1202 21:29:52.152162  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rclunk tag 0
I1202 21:29:52.153613  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1202 21:29:52.153713  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rerror tag 0 ename 'file not found' ecode 0
I1202 21:29:52.436315  328864 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:39428 Tclunk tag 0 fid 0
I1202 21:29:52.436366  328864 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:39428 Rclunk tag 0
I1202 21:29:52.436885  328864 main.go:127] stdlog: ufs.go:147 disconnected
I1202 21:29:52.457389  328864 out.go:179] * Unmounting /mount-9p ...
I1202 21:29:52.460366  328864 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1202 21:29:52.467469  328864 mount.go:180] unmount for /mount-9p ran successfully
I1202 21:29:52.467571  328864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/.mount-process: {Name:mkcae2868952617921434231cfdcdcee8e3684bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 21:29:52.470612  328864 out.go:203] 
W1202 21:29:52.473735  328864 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1202 21:29:52.476629  328864 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.41s)
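
Note: the mount this test exercises can be reproduced by hand from the values logged above; a minimal sketch, assuming the per-run values from this log (bind address 192.168.49.1, port 34231, profile functional-753958). The Tversion/Rversion exchange in the trace also shows the userspace server negotiating plain 9P2000 after the client requests 9P2000.L.

    # Host side: start the 9p file server (it must stay alive, as the log notes):
    #   out/minikube-linux-arm64 mount -p functional-753958 <host-dir>:/mount-9p
    # Node side (out/minikube-linux-arm64 ssh -p functional-753958), the
    # equivalent of the ssh_runner step logged at 21:29:50.504709:
    sudo mkdir -p /mount-9p
    sudo mount -t 9p \
      -o "dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=34231,trans=tcp,version=9p2000.L" \
      192.168.49.1 /mount-9p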

TestKubernetesUpgrade (789.62s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-578337 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-578337 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (33.302714679s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-578337
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-578337: (1.56727319s)
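
Note: condensed, the upgrade cycle this test drives (the status probe and the v1.35.0-beta.0 restart follow below) amounts to the following shell sketch, with flags copied verbatim from the logged steps:

    MK=out/minikube-linux-arm64
    P=kubernetes-upgrade-578337
    # start on the old release, stop, then restart on the new one
    $MK start -p $P --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
    $MK stop -p $P
    $MK start -p $P --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
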
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-578337 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-578337 status --format={{.Host}}: exit status 7 (117.083813ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
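
Note: the probe above exits 7 with "Stopped" because the profile was stopped in the previous step, and the harness accepts that ("may be ok"). A sketch of the same check, command copied from the step above; the exit status encodes component state rather than a hard error:

    out/minikube-linux-arm64 -p kubernetes-upgrade-578337 status --format={{.Host}}
    # 7 here corresponds to the stopped host shown in the stdout block above
    echo "status exit: $?"
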
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-578337 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-578337 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 109 (12m30.090154872s)

-- stdout --
	* [kubernetes-upgrade-578337] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-578337" primary control-plane node in "kubernetes-upgrade-578337" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	
	

-- /stdout --
** stderr ** 
	I1202 22:00:13.192833  459274 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:00:13.193033  459274 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:00:13.193061  459274 out.go:374] Setting ErrFile to fd 2...
	I1202 22:00:13.193082  459274 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:00:13.193479  459274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 22:00:13.194078  459274 out.go:368] Setting JSON to false
	I1202 22:00:13.195431  459274 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":13352,"bootTime":1764699462,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 22:00:13.195526  459274 start.go:143] virtualization:  
	I1202 22:00:13.198919  459274 out.go:179] * [kubernetes-upgrade-578337] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:00:13.202519  459274 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:00:13.202652  459274 notify.go:221] Checking for updates...
	I1202 22:00:13.207892  459274 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:00:13.210690  459274 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:00:13.213394  459274 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 22:00:13.216081  459274 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:00:13.218922  459274 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:00:13.222070  459274 config.go:182] Loaded profile config "kubernetes-upgrade-578337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1202 22:00:13.222658  459274 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:00:13.263395  459274 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:00:13.263531  459274 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:00:13.348967  459274 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-02 22:00:13.339667534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:00:13.349076  459274 docker.go:319] overlay module found
	I1202 22:00:13.352135  459274 out.go:179] * Using the docker driver based on existing profile
	I1202 22:00:13.354989  459274 start.go:309] selected driver: docker
	I1202 22:00:13.355014  459274 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-578337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-578337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:00:13.355107  459274 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:00:13.355765  459274 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:00:13.444031  459274 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-02 22:00:13.433568059 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:00:13.444355  459274 cni.go:84] Creating CNI manager for ""
	I1202 22:00:13.444421  459274 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:00:13.444461  459274 start.go:353] cluster config:
	{Name:kubernetes-upgrade-578337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-578337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:00:13.447749  459274 out.go:179] * Starting "kubernetes-upgrade-578337" primary control-plane node in "kubernetes-upgrade-578337" cluster
	I1202 22:00:13.449961  459274 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 22:00:13.452875  459274 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 22:00:13.456198  459274 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:00:13.456395  459274 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 22:00:13.477109  459274 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 22:00:13.477131  459274 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 22:00:13.543851  459274 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1202 22:00:13.768851  459274 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1202 22:00:13.768981  459274 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kubernetes-upgrade-578337/config.json ...
	I1202 22:00:13.769207  459274 cache.go:243] Successfully downloaded all kic artifacts
	I1202 22:00:13.769238  459274 start.go:360] acquireMachinesLock for kubernetes-upgrade-578337: {Name:mke01284f0e408c30c6537fa8a32af43dad735b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:00:13.769291  459274 start.go:364] duration metric: took 33.837µs to acquireMachinesLock for "kubernetes-upgrade-578337"
	I1202 22:00:13.769305  459274 start.go:96] Skipping create...Using existing machine configuration
	I1202 22:00:13.769310  459274 fix.go:54] fixHost starting: 
	I1202 22:00:13.769572  459274 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-578337 --format={{.State.Status}}
	I1202 22:00:13.769893  459274 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:00:13.769965  459274 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 22:00:13.769975  459274 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 92.436µs
	I1202 22:00:13.769989  459274 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 22:00:13.770001  459274 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:00:13.770047  459274 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 22:00:13.770054  459274 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 54.743µs
	I1202 22:00:13.770060  459274 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 22:00:13.770070  459274 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:00:13.770098  459274 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 22:00:13.770103  459274 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 34.715µs
	I1202 22:00:13.770110  459274 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 22:00:13.770120  459274 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:00:13.770150  459274 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 22:00:13.770155  459274 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 36.15µs
	I1202 22:00:13.770162  459274 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 22:00:13.770171  459274 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:00:13.770196  459274 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 22:00:13.770201  459274 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.917µs
	I1202 22:00:13.770206  459274 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 22:00:13.770214  459274 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:00:13.770238  459274 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 22:00:13.770243  459274 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 29.505µs
	I1202 22:00:13.770248  459274 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 22:00:13.770256  459274 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:00:13.770280  459274 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 22:00:13.770284  459274 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 29.21µs
	I1202 22:00:13.770289  459274 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 22:00:13.770302  459274 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:00:13.770329  459274 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 22:00:13.770334  459274 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 33.697µs
	I1202 22:00:13.770340  459274 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 22:00:13.770347  459274 cache.go:87] Successfully saved all images to host disk.
	I1202 22:00:13.792335  459274 fix.go:112] recreateIfNeeded on kubernetes-upgrade-578337: state=Stopped err=<nil>
	W1202 22:00:13.792364  459274 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 22:00:13.796257  459274 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-578337" ...
	I1202 22:00:13.796329  459274 cli_runner.go:164] Run: docker start kubernetes-upgrade-578337
	I1202 22:00:14.111646  459274 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-578337 --format={{.State.Status}}
	I1202 22:00:14.142218  459274 kic.go:430] container "kubernetes-upgrade-578337" state is running.
	I1202 22:00:14.142608  459274 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-578337
	I1202 22:00:14.167514  459274 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kubernetes-upgrade-578337/config.json ...
	I1202 22:00:14.167756  459274 machine.go:94] provisionDockerMachine start ...
	I1202 22:00:14.167829  459274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-578337
	I1202 22:00:14.198006  459274 main.go:143] libmachine: Using SSH client type: native
	I1202 22:00:14.198328  459274 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33333 <nil> <nil>}
	I1202 22:00:14.198337  459274 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 22:00:14.198976  459274 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49758->127.0.0.1:33333: read: connection reset by peer
	I1202 22:00:17.358762  459274 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-578337
	
	I1202 22:00:17.358834  459274 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-578337"
	I1202 22:00:17.358941  459274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-578337
	I1202 22:00:17.384755  459274 main.go:143] libmachine: Using SSH client type: native
	I1202 22:00:17.385076  459274 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33333 <nil> <nil>}
	I1202 22:00:17.385086  459274 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-578337 && echo "kubernetes-upgrade-578337" | sudo tee /etc/hostname
	I1202 22:00:17.555817  459274 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-578337
	
	I1202 22:00:17.555926  459274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-578337
	I1202 22:00:17.579466  459274 main.go:143] libmachine: Using SSH client type: native
	I1202 22:00:17.579902  459274 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33333 <nil> <nil>}
	I1202 22:00:17.579927  459274 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-578337' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-578337/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-578337' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 22:00:17.742454  459274 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 22:00:17.742507  459274 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 22:00:17.742551  459274 ubuntu.go:190] setting up certificates
	I1202 22:00:17.742566  459274 provision.go:84] configureAuth start
	I1202 22:00:17.742647  459274 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-578337
	I1202 22:00:17.768809  459274 provision.go:143] copyHostCerts
	I1202 22:00:17.768887  459274 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 22:00:17.768907  459274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 22:00:17.768983  459274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 22:00:17.769078  459274 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 22:00:17.769090  459274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 22:00:17.769117  459274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 22:00:17.769167  459274 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 22:00:17.769177  459274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 22:00:17.769201  459274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 22:00:17.769250  459274 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-578337 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-578337 localhost minikube]
	I1202 22:00:17.899440  459274 provision.go:177] copyRemoteCerts
	I1202 22:00:17.899512  459274 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 22:00:17.899602  459274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-578337
	I1202 22:00:17.918802  459274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/kubernetes-upgrade-578337/id_rsa Username:docker}
	I1202 22:00:18.023278  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 22:00:18.045876  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 22:00:18.066929  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1202 22:00:18.088326  459274 provision.go:87] duration metric: took 345.738395ms to configureAuth
	I1202 22:00:18.088358  459274 ubuntu.go:206] setting minikube options for container-runtime
	I1202 22:00:18.088545  459274 config.go:182] Loaded profile config "kubernetes-upgrade-578337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:00:18.088558  459274 machine.go:97] duration metric: took 3.92078705s to provisionDockerMachine
	I1202 22:00:18.088568  459274 start.go:293] postStartSetup for "kubernetes-upgrade-578337" (driver="docker")
	I1202 22:00:18.088584  459274 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 22:00:18.088650  459274 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 22:00:18.088706  459274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-578337
	I1202 22:00:18.111352  459274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/kubernetes-upgrade-578337/id_rsa Username:docker}
	I1202 22:00:18.218731  459274 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 22:00:18.222592  459274 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 22:00:18.222621  459274 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 22:00:18.222633  459274 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 22:00:18.222693  459274 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 22:00:18.222772  459274 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 22:00:18.222906  459274 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 22:00:18.231092  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:00:18.251272  459274 start.go:296] duration metric: took 162.684772ms for postStartSetup
	I1202 22:00:18.251363  459274 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:00:18.251439  459274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-578337
	I1202 22:00:18.270378  459274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/kubernetes-upgrade-578337/id_rsa Username:docker}
	I1202 22:00:18.373228  459274 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 22:00:18.382253  459274 fix.go:56] duration metric: took 4.612934738s for fixHost
	I1202 22:00:18.382282  459274 start.go:83] releasing machines lock for "kubernetes-upgrade-578337", held for 4.612981769s
	I1202 22:00:18.382376  459274 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-578337
	I1202 22:00:18.403427  459274 ssh_runner.go:195] Run: cat /version.json
	I1202 22:00:18.403487  459274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-578337
	I1202 22:00:18.403496  459274 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 22:00:18.403551  459274 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-578337
	I1202 22:00:18.430011  459274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/kubernetes-upgrade-578337/id_rsa Username:docker}
	I1202 22:00:18.444837  459274 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/kubernetes-upgrade-578337/id_rsa Username:docker}
	I1202 22:00:18.630905  459274 ssh_runner.go:195] Run: systemctl --version
	I1202 22:00:18.637803  459274 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 22:00:18.643041  459274 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 22:00:18.643131  459274 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 22:00:18.654159  459274 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 22:00:18.654188  459274 start.go:496] detecting cgroup driver to use...
	I1202 22:00:18.654226  459274 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 22:00:18.654292  459274 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 22:00:18.677942  459274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 22:00:18.694442  459274 docker.go:218] disabling cri-docker service (if available) ...
	I1202 22:00:18.694519  459274 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 22:00:18.716968  459274 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 22:00:18.734118  459274 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 22:00:18.902025  459274 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 22:00:19.052164  459274 docker.go:234] disabling docker service ...
	I1202 22:00:19.052233  459274 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 22:00:19.069411  459274 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 22:00:19.084661  459274 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 22:00:19.230053  459274 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 22:00:19.376040  459274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 22:00:19.388785  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 22:00:19.401971  459274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 22:00:19.410951  459274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 22:00:19.419385  459274 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 22:00:19.419450  459274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 22:00:19.427727  459274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:00:19.435571  459274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 22:00:19.443357  459274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:00:19.451458  459274 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 22:00:19.459351  459274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 22:00:19.467324  459274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 22:00:19.475764  459274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1202 22:00:19.484370  459274 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 22:00:19.492011  459274 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 22:00:19.499500  459274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:00:19.631420  459274 ssh_runner.go:195] Run: sudo systemctl restart containerd
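Taken together, the sed edits above pin the sandbox image, force SystemdCgroup = false to match the cgroupfs driver detected on the host, and point the CNI conf_dir at /etc/cni/net.d before containerd is restarted. A minimal way to verify the result on the node (a sketch; key names per containerd's CRI plugin config):

    sudo sed -n -e '/sandbox_image/p' -e '/SystemdCgroup/p' -e '/conf_dir/p' \
        /etc/containerd/config.toml
    # sandbox_image = "registry.k8s.io/pause:3.10.1"
    # SystemdCgroup = false        <- matches cgroupDriver: cgroupfs in the kubelet config below
    # conf_dir = "/etc/cni/net.d"
    sudo systemctl restart containerd && sudo ctr version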
	I1202 22:00:19.824712  459274 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 22:00:19.824881  459274 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 22:00:19.830362  459274 start.go:564] Will wait 60s for crictl version
	I1202 22:00:19.830431  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:00:19.835411  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 22:00:19.869367  459274 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 22:00:19.869432  459274 ssh_runner.go:195] Run: containerd --version
	I1202 22:00:19.907474  459274 ssh_runner.go:195] Run: containerd --version
	I1202 22:00:19.936910  459274 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 22:00:19.939835  459274 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-578337 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:00:19.963364  459274 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1202 22:00:19.967384  459274 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
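The hosts update above uses a replace-then-copy idiom rather than editing /etc/hosts in place, so a partially written file can never clobber it; $'\t...' is bash ANSI-C quoting for a literal tab before the hostname. Spelled out (same command, reformatted for readability):

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.76.1\thost.minikube.internal'
    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts

The identical pattern reappears later in this log for control-plane.minikube.internal.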
	I1202 22:00:19.976451  459274 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-578337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-578337 Namespace:default APIServerHAVIP: APISe
rverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 22:00:19.976562  459274 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:00:19.976622  459274 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:00:20.001842  459274 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1202 22:00:20.001875  459274 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1202 22:00:20.001928  459274 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:00:20.002188  459274 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:00:20.002286  459274 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:00:20.002387  459274 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:00:20.002492  459274 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:00:20.002583  459274 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1202 22:00:20.002731  459274 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:00:20.002849  459274 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:00:20.006209  459274 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1202 22:00:20.006211  459274 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:00:20.006316  459274 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:00:20.006384  459274 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:00:20.006543  459274 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:00:20.006693  459274 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:00:20.006814  459274 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:00:20.006939  459274 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:00:20.350937  459274 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1202 22:00:20.351055  459274 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1202 22:00:20.362932  459274 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1202 22:00:20.363042  459274 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:00:20.366861  459274 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1202 22:00:20.366946  459274 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:00:20.367289  459274 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1202 22:00:20.367338  459274 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:00:20.367637  459274 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1202 22:00:20.367718  459274 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:00:20.369569  459274 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1202 22:00:20.369621  459274 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1202 22:00:20.371982  459274 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1202 22:00:20.372053  459274 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:00:20.430973  459274 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1202 22:00:20.431020  459274 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1202 22:00:20.431079  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:00:20.432405  459274 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1202 22:00:20.432440  459274 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:00:20.432483  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:00:20.483483  459274 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1202 22:00:20.483576  459274 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:00:20.483665  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:00:20.483772  459274 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1202 22:00:20.483810  459274 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:00:20.483871  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:00:20.503864  459274 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1202 22:00:20.503951  459274 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:00:20.504032  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:00:20.504144  459274 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1202 22:00:20.504192  459274 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:00:20.504235  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:00:20.504315  459274 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1202 22:00:20.504362  459274 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:00:20.504402  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:00:20.504512  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:00:20.504620  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:00:20.509515  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:00:20.511027  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:00:20.604291  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:00:20.604473  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:00:20.604514  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:00:20.604541  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:00:20.604648  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:00:20.604800  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:00:20.656508  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:00:20.820579  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:00:20.820684  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:00:20.820760  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:00:20.820798  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:00:20.820824  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:00:20.820844  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:00:20.848493  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:00:21.022312  459274 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1202 22:00:21.022418  459274 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1202 22:00:21.022474  459274 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1202 22:00:21.022518  459274 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:00:21.022572  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:00:21.022624  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:00:21.022661  459274 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1202 22:00:21.022705  459274 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:00:21.022756  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:00:21.022798  459274 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1202 22:00:21.022839  459274 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:00:21.112109  459274 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1202 22:00:21.112190  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1202 22:00:21.112297  459274 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1202 22:00:21.112341  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1202 22:00:21.112439  459274 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1202 22:00:21.112471  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1202 22:00:21.112563  459274 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1202 22:00:21.112684  459274 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:00:21.112773  459274 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1202 22:00:21.112858  459274 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:00:21.112946  459274 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1202 22:00:21.112979  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1202 22:00:21.113075  459274 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1202 22:00:21.113160  459274 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:00:21.174992  459274 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1202 22:00:21.175066  459274 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1202 22:00:21.175099  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1202 22:00:21.175018  459274 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1202 22:00:21.175153  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1202 22:00:21.175101  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	W1202 22:00:21.256195  459274 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1202 22:00:21.256374  459274 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1202 22:00:21.256447  459274 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:00:21.274971  459274 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1202 22:00:21.275042  459274 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1202 22:00:21.578852  459274 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1202 22:00:21.578975  459274 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:00:21.579076  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:00:21.811847  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:00:21.811971  459274 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1202 22:00:22.036878  459274 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1202 22:00:22.037011  459274 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:00:22.050530  459274 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:00:22.050628  459274 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:00:22.096053  459274 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1202 22:00:22.096108  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1202 22:00:23.681547  459274 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.630889342s)
	I1202 22:00:23.681588  459274 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1202 22:00:23.681611  459274 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:00:23.681690  459274 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:00:24.958616  459274 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.276895842s)
	I1202 22:00:24.958643  459274 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1202 22:00:24.958679  459274 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:00:24.958747  459274 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:00:25.979401  459274 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.020625407s)
	I1202 22:00:25.979424  459274 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1202 22:00:25.979441  459274 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:00:25.979491  459274 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:00:27.357743  459274 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.378229389s)
	I1202 22:00:27.357769  459274 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1202 22:00:27.357788  459274 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:00:27.357834  459274 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:00:28.603009  459274 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.245135134s)
	I1202 22:00:28.603034  459274 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1202 22:00:28.603053  459274 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:00:28.603101  459274 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:00:29.656790  459274 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.053660931s)
	I1202 22:00:29.656816  459274 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1202 22:00:29.656833  459274 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:00:29.656883  459274 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:00:30.071312  459274 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1202 22:00:30.071348  459274 cache_images.go:125] Successfully loaded all cached images
	I1202 22:00:30.071357  459274 cache_images.go:94] duration metric: took 10.06946652s to LoadCachedImages
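The LoadCachedImages sequence above follows one pattern per image: probe with ctr images ls, crictl rmi anything stale, scp the cached tarball when the stat existence check fails, then ctr images import into containerd's k8s.io namespace. A manual equivalent for a single image (the host alias "node" and local cache path are illustrative):

    scp ~/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 node:/var/lib/minikube/images/
    ssh node 'sudo ctr -n k8s.io images import /var/lib/minikube/images/pause_3.10.1'
    ssh node 'sudo crictl images | grep pause'    # the CRI runtime now sees the image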
	I1202 22:00:30.071370  459274 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1202 22:00:30.071485  459274 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-578337 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-578337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 22:00:30.071562  459274 ssh_runner.go:195] Run: sudo crictl info
	I1202 22:00:30.106119  459274 cni.go:84] Creating CNI manager for ""
	I1202 22:00:30.106151  459274 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:00:30.106175  459274 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 22:00:30.106198  459274 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-578337 NodeName:kubernetes-upgrade-578337 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/
certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 22:00:30.106341  459274 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kubernetes-upgrade-578337"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
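Before kubeadm consumes the manifest above, it can be sanity-checked offline; recent kubeadm releases ship a validate subcommand (its availability in this beta is assumed, and the binary itself is only installed a few lines below):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new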
	
	I1202 22:00:30.106427  459274 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:00:30.115553  459274 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1202 22:00:30.115630  459274 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:00:30.128052  459274 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1202 22:00:30.128122  459274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:00:30.128052  459274 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1202 22:00:30.128299  459274 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1202 22:00:30.128212  459274 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1202 22:00:30.128390  459274 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1202 22:00:30.153973  459274 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1202 22:00:30.154013  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1202 22:00:30.154144  459274 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1202 22:00:30.154196  459274 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1202 22:00:30.154213  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1202 22:00:30.180892  459274 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1202 22:00:30.180953  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
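The ?checksum=file:... suffix in the "Not caching binary" lines above is go-getter-style syntax consumed by minikube's downloader, not part of the real URL. A manual equivalent of the verified fetch (the dl.k8s.io .sha256 files contain a bare digest, so a filename must be appended for sha256sum -c):

    curl -fLO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet
    echo "$(curl -fsSL https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256)  kubelet" \
        | sha256sum -c -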
	I1202 22:00:31.145711  459274 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 22:00:31.155481  459274 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (336 bytes)
	I1202 22:00:31.169166  459274 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 22:00:31.182246  459274 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I1202 22:00:31.194937  459274 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1202 22:00:31.199720  459274 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:00:31.210330  459274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:00:31.328555  459274 ssh_runner.go:195] Run: sudo systemctl start kubelet
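At this point the kubelet unit and drop-in written above are live; the rendered unit can be inspected with systemd's own tooling (paths match the scp targets a few lines up):

    systemctl cat kubelet | sed -n '1,20p'
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf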
	I1202 22:00:31.344855  459274 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kubernetes-upgrade-578337 for IP: 192.168.76.2
	I1202 22:00:31.344877  459274 certs.go:195] generating shared ca certs ...
	I1202 22:00:31.344895  459274 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:00:31.345033  459274 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 22:00:31.345081  459274 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 22:00:31.345093  459274 certs.go:257] generating profile certs ...
	I1202 22:00:31.345189  459274 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kubernetes-upgrade-578337/client.key
	I1202 22:00:31.345259  459274 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kubernetes-upgrade-578337/apiserver.key.cbae4953
	I1202 22:00:31.345307  459274 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kubernetes-upgrade-578337/proxy-client.key
	I1202 22:00:31.345422  459274 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 22:00:31.345459  459274 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 22:00:31.345473  459274 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 22:00:31.345503  459274 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 22:00:31.345532  459274 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 22:00:31.345560  459274 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 22:00:31.345608  459274 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:00:31.346294  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 22:00:31.365760  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 22:00:31.386825  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 22:00:31.405036  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 22:00:31.426218  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kubernetes-upgrade-578337/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1202 22:00:31.443815  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kubernetes-upgrade-578337/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 22:00:31.460811  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kubernetes-upgrade-578337/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 22:00:31.480129  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kubernetes-upgrade-578337/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 22:00:31.500355  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 22:00:31.517715  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 22:00:31.535383  459274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 22:00:31.552392  459274 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 22:00:31.564943  459274 ssh_runner.go:195] Run: openssl version
	I1202 22:00:31.572225  459274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 22:00:31.580448  459274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 22:00:31.584365  459274 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 22:00:31.584483  459274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 22:00:31.627055  459274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 22:00:31.635103  459274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 22:00:31.643263  459274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:00:31.647081  459274 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:00:31.647200  459274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:00:31.688073  459274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 22:00:31.696304  459274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 22:00:31.704626  459274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 22:00:31.708662  459274 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 22:00:31.708725  459274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 22:00:31.749630  459274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
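The /etc/ssl/certs/<hash>.0 symlinks created above follow OpenSSL's hashed-directory lookup scheme: the link name is the certificate's subject hash, which is exactly what the preceding openssl x509 -hash calls compute. For example:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem   # prints 51391683
    ls -l /etc/ssl/certs/51391683.0                                       # symlink to 263241.pem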
	I1202 22:00:31.757705  459274 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 22:00:31.761514  459274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 22:00:31.802294  459274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 22:00:31.843213  459274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 22:00:31.883876  459274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 22:00:31.925536  459274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 22:00:31.966613  459274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
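These six openssl checks all use -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours); a zero exit is what lets minikube skip regenerating the cert. For instance:

    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
        && echo 'valid for at least 24h' \
        || echo 'expiring; would be regenerated'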
	I1202 22:00:32.008430  459274 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-578337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-578337 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:00:32.008523  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 22:00:32.008607  459274 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:00:32.036892  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:00:32.036914  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:00:32.036919  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:00:32.036923  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:00:32.036926  459274 cri.go:89] found id: ""
	I1202 22:00:32.036977  459274 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1202 22:00:32.061034  459274 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-02T22:00:32Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1202 22:00:32.061115  459274 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 22:00:32.070905  459274 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 22:00:32.070925  459274 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 22:00:32.070990  459274 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 22:00:32.080352  459274 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 22:00:32.080896  459274 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-578337" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:00:32.081136  459274 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-261381/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-578337" cluster setting kubeconfig missing "kubernetes-upgrade-578337" context setting]
	I1202 22:00:32.081594  459274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:00:32.082305  459274 kapi.go:59] client config for kubernetes-upgrade-578337: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kubernetes-upgrade-578337/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kubernetes-upgrade-578337/client.key", CAFile:"/home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb33c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 22:00:32.082826  459274 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 22:00:32.082843  459274 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 22:00:32.082856  459274 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 22:00:32.082867  459274 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 22:00:32.082872  459274 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 22:00:32.084605  459274 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 22:00:32.098161  459274 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-02 21:59:53.457575378 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-02 22:00:31.192522223 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///run/containerd/containerd.sock
	   name: "kubernetes-upgrade-578337"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
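
The drift check above is just diff -u over the rendered config: diff exits 0 when the files match and 1 when they differ, and a difference means the node must be reconfigured. Here the drift is the kubeadm API move from v1beta3 to v1beta4, which turns every extraArgs map into a list of name/value pairs and drops the etcd proxy-refresh-interval override alongside the version bump. A minimal sketch of the same probe, run locally rather than over SSH:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // kubeadmConfigDrifted reports whether the kubeadm config on disk differs
    // from the freshly rendered one, mirroring the `sudo diff -u old new`
    // probe in the log: diff exits 0 when identical, 1 when the files differ.
    func kubeadmConfigDrifted(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, "", nil // identical
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		return true, string(out), nil // differs; out holds the unified diff
    	}
    	return false, "", err // status >= 2: diff itself failed
    }

    func main() {
    	drifted, diff, err := kubeadmConfigDrifted(
    		"/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new",
    	)
    	if err != nil {
    		panic(err)
    	}
    	if drifted {
    		fmt.Println("config drift detected:\n" + diff)
    	}
    }
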
	I1202 22:00:32.098182  459274 kubeadm.go:1161] stopping kube-system containers ...
	I1202 22:00:32.098194  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1202 22:00:32.098250  459274 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:00:32.128705  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:00:32.128725  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:00:32.128730  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:00:32.128733  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:00:32.128736  459274 cri.go:89] found id: ""
	I1202 22:00:32.128741  459274 cri.go:252] Stopping containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:00:32.128804  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:00:32.133458  459274 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3
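
Stopping the old control plane is two crictl calls, as the pair of commands above shows: list the IDs of every container labeled with the kube-system namespace, then pass the whole batch to crictl stop with a 10s grace period. A sketch of that sequence (flags copied from the log; run directly rather than through ssh_runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// List all containers, running or exited, whose pod is in kube-system:
    	// crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		panic(err)
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return // nothing to stop
    	}
    	// Stop the whole batch with the same 10s timeout used in the log.
    	args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
    	if err := exec.Command("sudo", args...).Run(); err != nil {
    		panic(err)
    	}
    	fmt.Printf("stopped %d kube-system containers\n", len(ids))
    }
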
	I1202 22:00:32.160994  459274 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 22:00:32.175792  459274 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 22:00:32.183999  459274 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec  2 21:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec  2 21:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec  2 22:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec  2 21:59 /etc/kubernetes/scheduler.conf
	
	I1202 22:00:32.184069  459274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 22:00:32.191936  459274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 22:00:32.199502  459274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 22:00:32.206972  459274 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 22:00:32.207038  459274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 22:00:32.214462  459274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 22:00:32.221857  459274 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1202 22:00:32.221949  459274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
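
The four /etc/kubernetes/*.conf probes above all apply one rule: grep each kubeconfig for the expected control-plane endpoint, and if grep exits non-zero (as it did for controller-manager.conf and scheduler.conf) delete the file so the upcoming kubeadm phase regenerates it. A sketch of that rule (note grep also exits non-zero when the file is missing, which the sketch treats the same way):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    // removeIfStale deletes a kubeconfig that does not reference the expected
    // control-plane endpoint, so `kubeadm init phase kubeconfig` rewrites it.
    func removeIfStale(path string) error {
    	if err := exec.Command("sudo", "grep", endpoint, path).Run(); err == nil {
    		return nil // endpoint found; keep the file
    	}
    	fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
    	return exec.Command("sudo", "rm", "-f", path).Run()
    }

    func main() {
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if err := removeIfStale(f); err != nil {
    			panic(err)
    		}
    	}
    }
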
	I1202 22:00:32.229275  459274 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 22:00:32.239042  459274 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 22:00:32.293715  459274 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 22:00:33.944076  459274 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.650330046s)
	I1202 22:00:33.944145  459274 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 22:00:34.201110  459274 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 22:00:34.311093  459274 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
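
Rather than a full kubeadm init, the reconfigure path replays individual init phases against the new config, each invoked with PATH pointed at the versioned binaries directory. A sketch of the sequence above as a loop (versions and paths from the log; the "/usr/bin" PATH suffix is an assumption so sudo and kubeadm's helpers still resolve):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	binDir := "/var/lib/minikube/binaries/v1.35.0-beta.0"
    	cfg := "/var/tmp/minikube/kubeadm.yaml"
    	// Phases in the order the log runs them: regenerate certs and
    	// kubeconfigs, restart the kubelet, then rewrite the static pod
    	// manifests for the control plane and local etcd.
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := []string{"env", "PATH=" + binDir + ":/usr/bin", "kubeadm", "init", "phase"}
    		args = append(args, p...)
    		args = append(args, "--config", cfg)
    		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
    			panic(fmt.Sprintf("kubeadm init phase %v: %v\n%s", p, err, out))
    		}
    	}
    }
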
	I1202 22:00:34.366722  459274 api_server.go:52] waiting for apiserver process to appear ...
	I1202 22:00:34.366861  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:34.867592  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:35.367691  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:35.867013  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:36.367666  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:36.867042  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:37.367840  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:37.867547  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:38.366987  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:38.867833  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:39.366937  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:39.867626  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:40.367080  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:40.867955  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:41.367826  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:41.867903  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:42.367586  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:42.867425  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:43.367585  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:43.867791  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:44.367700  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:44.867221  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:45.366915  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:45.867691  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:46.367458  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:46.867926  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:47.367751  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:47.867676  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:48.367474  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:48.867521  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:49.366988  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:49.867697  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:50.367933  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:50.867003  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:51.367441  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:51.866951  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:52.367044  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:52.867585  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:53.367503  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:53.866993  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:54.367690  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:54.867577  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:55.367327  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:55.867021  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:56.367807  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:56.867011  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:57.367035  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:57.867830  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:58.367471  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:58.867030  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:59.367719  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:00:59.867620  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:00.367031  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:00.867669  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:01.366953  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:01.867580  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:02.367402  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:02.867489  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:03.367504  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:03.867429  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:04.367514  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:04.866979  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:05.367747  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:05.867649  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:06.367859  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:06.867380  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:07.368865  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:07.867599  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:08.367064  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:08.867827  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:09.367870  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:09.866997  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:10.367067  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:10.867917  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:11.367774  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:11.867211  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:12.366916  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:12.867675  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:13.367426  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:13.867863  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:14.367653  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:14.866969  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:15.367878  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:15.867763  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:16.367948  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:16.867698  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:17.367183  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:17.867016  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:18.367422  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:18.867618  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:19.367646  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:19.867052  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:20.367668  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:20.867617  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:21.367760  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:21.867141  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:22.367710  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:22.867242  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:23.367425  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:23.867915  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:24.367619  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:24.867651  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:25.366987  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:25.866975  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:26.367025  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:26.867618  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:27.367114  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:27.868817  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:28.367541  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:28.868119  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:29.367550  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:29.866961  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:30.367611  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:30.867689  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:31.366976  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:31.866970  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:32.367662  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:32.867501  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:33.367239  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:33.867850  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
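
The minute of pgrep lines above is a fixed-interval wait: every 500ms (the .367/.867 cadence in the timestamps), check whether a kube-apiserver process whose command line mentions minikube exists; pgrep -f matches the full command line, -x requires an exact match of that pattern, and -n takes the newest hit. A sketch of the loop, with a timeout of roughly the minute that evidently expired here before the run fell through to log gathering:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerProcess polls pgrep until a kube-apiserver process whose
    // command line mentions minikube shows up, or the deadline passes. This
    // mirrors the 500ms cadence visible in the timestamps above.
    func waitForAPIServerProcess(timeout time.Duration) (int, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			var pid int
    			fmt.Sscanf(string(out), "%d", &pid)
    			return pid, nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return 0, errors.New("timed out waiting for kube-apiserver process")
    }

    func main() {
    	pid, err := waitForAPIServerProcess(60 * time.Second)
    	if err != nil {
    		fmt.Println(err) // the path this log takes: fall through to log gathering
    		return
    	}
    	fmt.Println("apiserver pid:", pid)
    }
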
	I1202 22:01:34.367601  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:01:34.367700  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:01:34.405280  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:01:34.405299  459274 cri.go:89] found id: ""
	I1202 22:01:34.405307  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:01:34.405363  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:34.409856  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:01:34.409927  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:01:34.435528  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:01:34.435601  459274 cri.go:89] found id: ""
	I1202 22:01:34.435613  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:01:34.435705  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:34.439758  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:01:34.439824  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:01:34.466211  459274 cri.go:89] found id: ""
	I1202 22:01:34.466233  459274 logs.go:282] 0 containers: []
	W1202 22:01:34.466243  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:01:34.466249  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:01:34.466305  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:01:34.499807  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:01:34.499824  459274 cri.go:89] found id: ""
	I1202 22:01:34.499832  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:01:34.499901  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:34.504156  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:01:34.504229  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:01:34.533174  459274 cri.go:89] found id: ""
	I1202 22:01:34.533196  459274 logs.go:282] 0 containers: []
	W1202 22:01:34.533205  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:01:34.533211  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:01:34.533278  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:01:34.572019  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:01:34.572109  459274 cri.go:89] found id: ""
	I1202 22:01:34.572132  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:01:34.572235  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:34.577262  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:01:34.577450  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:01:34.616165  459274 cri.go:89] found id: ""
	I1202 22:01:34.616245  459274 logs.go:282] 0 containers: []
	W1202 22:01:34.616270  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:01:34.616291  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:01:34.616404  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:01:34.656099  459274 cri.go:89] found id: ""
	I1202 22:01:34.656195  459274 logs.go:282] 0 containers: []
	W1202 22:01:34.656227  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:01:34.656270  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:01:34.656299  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:01:34.696871  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:01:34.696970  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:01:34.756797  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:01:34.756893  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:01:34.776319  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:01:34.776461  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:01:34.891253  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:01:34.891326  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:01:34.891376  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:01:34.948087  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:01:34.948163  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:01:34.996674  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:01:34.996826  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:01:35.075448  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:01:35.075487  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:01:35.129103  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:01:35.129138  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
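
Once the wait fails, each "Gathering logs for X" pair above maps one diagnostic source to one shell command: journalctl for the kubelet and containerd units, dmesg for kernel warnings, a crictl/docker ps fallback for container status, and crictl logs --tail for each control-plane container found earlier. A sketch of that table (commands copied from the log; the container ID is the kube-apiserver ID from the crictl listing, and the remaining IDs would be filled in the same way):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Source name -> command, as run in the log-gathering cycle above.
    	apiserverID := "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
    	sources := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"containerd":       "sudo journalctl -u containerd -n 400",
    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    		"kube-apiserver":   "sudo crictl logs --tail 400 " + apiserverID,
    	}
    	for name, cmd := range sources {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("== %s: %v ==\n", name, err)
    			continue
    		}
    		fmt.Printf("== %s ==\n%s\n", name, out)
    	}
    }
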
	I1202 22:01:37.687611  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:37.698650  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:01:37.698726  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:01:37.729724  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:01:37.729744  459274 cri.go:89] found id: ""
	I1202 22:01:37.729752  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:01:37.729804  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:37.734341  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:01:37.734405  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:01:37.764242  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:01:37.764264  459274 cri.go:89] found id: ""
	I1202 22:01:37.764272  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:01:37.764326  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:37.768779  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:01:37.768896  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:01:37.813945  459274 cri.go:89] found id: ""
	I1202 22:01:37.814026  459274 logs.go:282] 0 containers: []
	W1202 22:01:37.814050  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:01:37.814084  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:01:37.814186  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:01:37.848478  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:01:37.848547  459274 cri.go:89] found id: ""
	I1202 22:01:37.848585  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:01:37.848682  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:37.852701  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:01:37.852769  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:01:37.896565  459274 cri.go:89] found id: ""
	I1202 22:01:37.896588  459274 logs.go:282] 0 containers: []
	W1202 22:01:37.896596  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:01:37.896603  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:01:37.896666  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:01:37.935220  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:01:37.935289  459274 cri.go:89] found id: ""
	I1202 22:01:37.935300  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:01:37.935412  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:37.939990  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:01:37.940111  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:01:37.979075  459274 cri.go:89] found id: ""
	I1202 22:01:37.979148  459274 logs.go:282] 0 containers: []
	W1202 22:01:37.979172  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:01:37.979193  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:01:37.979285  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:01:38.006754  459274 cri.go:89] found id: ""
	I1202 22:01:38.006783  459274 logs.go:282] 0 containers: []
	W1202 22:01:38.006792  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:01:38.006817  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:01:38.006852  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:01:38.086121  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:01:38.086165  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:01:38.208488  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:01:38.208605  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:01:38.208671  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:01:38.266292  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:01:38.266353  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:01:38.301236  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:01:38.301276  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:01:38.343390  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:01:38.343468  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:01:38.378339  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:01:38.378371  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:01:38.415663  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:01:38.415759  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:01:38.432839  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:01:38.432911  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:01:40.962768  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:40.985890  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:01:40.985957  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:01:41.016762  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:01:41.016782  459274 cri.go:89] found id: ""
	I1202 22:01:41.016795  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:01:41.016854  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:41.020907  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:01:41.020971  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:01:41.048590  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:01:41.048608  459274 cri.go:89] found id: ""
	I1202 22:01:41.048616  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:01:41.048669  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:41.052597  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:01:41.052708  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:01:41.085715  459274 cri.go:89] found id: ""
	I1202 22:01:41.085736  459274 logs.go:282] 0 containers: []
	W1202 22:01:41.085745  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:01:41.085751  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:01:41.085809  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:01:41.115972  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:01:41.116053  459274 cri.go:89] found id: ""
	I1202 22:01:41.116075  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:01:41.116152  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:41.120550  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:01:41.120618  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:01:41.154977  459274 cri.go:89] found id: ""
	I1202 22:01:41.154998  459274 logs.go:282] 0 containers: []
	W1202 22:01:41.155007  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:01:41.155013  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:01:41.155071  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:01:41.182265  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:01:41.182285  459274 cri.go:89] found id: ""
	I1202 22:01:41.182293  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:01:41.182358  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:41.186782  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:01:41.186852  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:01:41.222988  459274 cri.go:89] found id: ""
	I1202 22:01:41.223010  459274 logs.go:282] 0 containers: []
	W1202 22:01:41.223019  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:01:41.223026  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:01:41.223094  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:01:41.256100  459274 cri.go:89] found id: ""
	I1202 22:01:41.256181  459274 logs.go:282] 0 containers: []
	W1202 22:01:41.256205  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:01:41.256243  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:01:41.256282  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:01:41.327370  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:01:41.327408  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:01:41.351371  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:01:41.351404  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:01:41.467064  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:01:41.467086  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:01:41.467099  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:01:41.518027  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:01:41.518060  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:01:41.556742  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:01:41.556768  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:01:41.602698  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:01:41.602745  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:01:41.650750  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:01:41.650825  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:01:41.695250  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:01:41.695284  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:01:44.238607  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:44.248875  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:01:44.248945  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:01:44.286124  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:01:44.286145  459274 cri.go:89] found id: ""
	I1202 22:01:44.286154  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:01:44.286210  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:44.310095  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:01:44.310169  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:01:44.375191  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:01:44.375214  459274 cri.go:89] found id: ""
	I1202 22:01:44.375223  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:01:44.375280  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:44.381216  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:01:44.381290  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:01:44.435914  459274 cri.go:89] found id: ""
	I1202 22:01:44.435939  459274 logs.go:282] 0 containers: []
	W1202 22:01:44.435947  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:01:44.435954  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:01:44.436016  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:01:44.475123  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:01:44.475144  459274 cri.go:89] found id: ""
	I1202 22:01:44.475152  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:01:44.475218  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:44.479944  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:01:44.480012  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:01:44.524182  459274 cri.go:89] found id: ""
	I1202 22:01:44.524209  459274 logs.go:282] 0 containers: []
	W1202 22:01:44.524217  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:01:44.524225  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:01:44.524285  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:01:44.565039  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:01:44.565061  459274 cri.go:89] found id: ""
	I1202 22:01:44.565070  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:01:44.565126  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:44.569152  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:01:44.569217  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:01:44.615694  459274 cri.go:89] found id: ""
	I1202 22:01:44.615716  459274 logs.go:282] 0 containers: []
	W1202 22:01:44.615725  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:01:44.615734  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:01:44.615795  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:01:44.659483  459274 cri.go:89] found id: ""
	I1202 22:01:44.659510  459274 logs.go:282] 0 containers: []
	W1202 22:01:44.659519  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:01:44.659535  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:01:44.659546  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:01:44.748994  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:01:44.749030  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:01:44.777321  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:01:44.777350  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:01:44.885101  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:01:44.885135  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:01:44.885149  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:01:44.953364  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:01:44.953396  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:01:45.000152  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:01:45.000184  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:01:45.084174  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:01:45.084226  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:01:45.186826  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:01:45.186871  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:01:45.329968  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:01:45.330009  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:01:47.883176  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:47.893836  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:01:47.893906  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:01:47.927568  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:01:47.927591  459274 cri.go:89] found id: ""
	I1202 22:01:47.927599  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:01:47.927654  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:47.931701  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:01:47.931774  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:01:47.966414  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:01:47.966440  459274 cri.go:89] found id: ""
	I1202 22:01:47.966448  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:01:47.966508  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:47.972414  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:01:47.972487  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:01:48.003785  459274 cri.go:89] found id: ""
	I1202 22:01:48.003815  459274 logs.go:282] 0 containers: []
	W1202 22:01:48.003824  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:01:48.003832  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:01:48.003904  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:01:48.038123  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:01:48.038191  459274 cri.go:89] found id: ""
	I1202 22:01:48.038203  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:01:48.038266  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:48.043043  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:01:48.043162  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:01:48.083309  459274 cri.go:89] found id: ""
	I1202 22:01:48.083331  459274 logs.go:282] 0 containers: []
	W1202 22:01:48.083339  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:01:48.083346  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:01:48.083406  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:01:48.118213  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:01:48.118233  459274 cri.go:89] found id: ""
	I1202 22:01:48.118241  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:01:48.118298  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:48.122582  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:01:48.122656  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:01:48.152357  459274 cri.go:89] found id: ""
	I1202 22:01:48.152386  459274 logs.go:282] 0 containers: []
	W1202 22:01:48.152395  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:01:48.152402  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:01:48.152464  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:01:48.183921  459274 cri.go:89] found id: ""
	I1202 22:01:48.183996  459274 logs.go:282] 0 containers: []
	W1202 22:01:48.184019  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:01:48.184048  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:01:48.184097  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:01:48.230136  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:01:48.230209  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:01:48.289388  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:01:48.289462  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:01:48.382683  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:01:48.382713  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:01:48.478026  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:01:48.478087  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:01:48.478116  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:01:48.518349  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:01:48.518388  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:01:48.564079  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:01:48.564106  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:01:48.642361  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:01:48.642439  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:01:48.660438  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:01:48.660465  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:01:51.198886  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:51.210648  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:01:51.210721  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:01:51.247056  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:01:51.247073  459274 cri.go:89] found id: ""
	I1202 22:01:51.247081  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:01:51.247136  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:51.257853  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:01:51.257925  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:01:51.341368  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:01:51.341397  459274 cri.go:89] found id: ""
	I1202 22:01:51.341406  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:01:51.341464  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:51.369906  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:01:51.369976  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:01:51.428802  459274 cri.go:89] found id: ""
	I1202 22:01:51.428824  459274 logs.go:282] 0 containers: []
	W1202 22:01:51.428832  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:01:51.428839  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:01:51.428900  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:01:51.474986  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:01:51.475004  459274 cri.go:89] found id: ""
	I1202 22:01:51.475012  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:01:51.475066  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:51.488637  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:01:51.488722  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:01:51.525432  459274 cri.go:89] found id: ""
	I1202 22:01:51.525455  459274 logs.go:282] 0 containers: []
	W1202 22:01:51.525464  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:01:51.525471  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:01:51.525528  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:01:51.569483  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:01:51.569501  459274 cri.go:89] found id: ""
	I1202 22:01:51.569508  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:01:51.569562  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:51.574009  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:01:51.574131  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:01:51.602696  459274 cri.go:89] found id: ""
	I1202 22:01:51.602717  459274 logs.go:282] 0 containers: []
	W1202 22:01:51.602725  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:01:51.602731  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:01:51.602788  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:01:51.665069  459274 cri.go:89] found id: ""
	I1202 22:01:51.665090  459274 logs.go:282] 0 containers: []
	W1202 22:01:51.665098  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:01:51.665112  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:01:51.665124  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:01:51.756326  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:01:51.756398  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:01:51.786590  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:01:51.786620  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:01:51.841087  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:01:51.841122  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:01:51.920250  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:01:51.920329  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:01:51.964743  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:01:51.964873  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:01:52.002950  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:01:52.003045  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:01:52.196352  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:01:52.196371  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:01:52.196384  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:01:52.248819  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:01:52.248889  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
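Each cycle above has the same shape: probe for a running kube-apiserver process with pgrep, enumerate one container per control-plane component via "crictl ps -a --quiet --name=<component>", then tail the logs of every container found. A minimal, self-contained sketch of the enumeration step, assuming only that crictl is installed; the helper name and the direct exec call are ours, not minikube's:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs `sudo crictl ps -a --quiet --name=<name>` and
// returns each non-empty output line as a container ID, mirroring the
// "found id:" lines in the log above.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps for %q: %w", name, err)
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}

An empty result list is what triggers the No container was found matching "coredns" style warnings: in this run only kube-apiserver, etcd, kube-scheduler and kube-controller-manager have containers, while coredns, kube-proxy, kindnet and storage-provisioner never started.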
	I1202 22:01:54.808914  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:54.821732  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:01:54.821803  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:01:54.849238  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:01:54.849257  459274 cri.go:89] found id: ""
	I1202 22:01:54.849265  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:01:54.849317  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:54.856155  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:01:54.856226  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:01:54.886164  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:01:54.886182  459274 cri.go:89] found id: ""
	I1202 22:01:54.886190  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:01:54.886247  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:54.891827  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:01:54.891902  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:01:54.923310  459274 cri.go:89] found id: ""
	I1202 22:01:54.923332  459274 logs.go:282] 0 containers: []
	W1202 22:01:54.923340  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:01:54.923347  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:01:54.923409  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:01:54.989141  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:01:54.989159  459274 cri.go:89] found id: ""
	I1202 22:01:54.989167  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:01:54.989221  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:54.993543  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:01:54.993666  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:01:55.040829  459274 cri.go:89] found id: ""
	I1202 22:01:55.040854  459274 logs.go:282] 0 containers: []
	W1202 22:01:55.040903  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:01:55.040911  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:01:55.040998  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:01:55.083348  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:01:55.083371  459274 cri.go:89] found id: ""
	I1202 22:01:55.083380  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:01:55.083436  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:55.089227  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:01:55.089304  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:01:55.129126  459274 cri.go:89] found id: ""
	I1202 22:01:55.129152  459274 logs.go:282] 0 containers: []
	W1202 22:01:55.129161  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:01:55.129172  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:01:55.129230  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:01:55.164885  459274 cri.go:89] found id: ""
	I1202 22:01:55.164913  459274 logs.go:282] 0 containers: []
	W1202 22:01:55.164922  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:01:55.164935  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:01:55.164947  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:01:55.201349  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:01:55.201384  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:01:55.244943  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:01:55.245009  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:01:55.325070  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:01:55.325127  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:01:55.368072  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:01:55.368106  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:01:55.453644  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:01:55.453942  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:01:55.500708  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:01:55.500741  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:01:55.554725  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:01:55.554755  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:01:55.641585  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:01:55.641603  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:01:55.641617  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
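The recurring "connection to the server localhost:8443 was refused" is the key symptom of these cycles: a refusal means nothing is listening on the port at all, which is different from a timeout (a routing or firewall problem). A standalone probe that makes this distinction explicit, not minikube code, assuming a Linux host:

package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	switch {
	case err == nil:
		conn.Close()
		fmt.Println("listener present on :8443")
	case errors.Is(err, syscall.ECONNREFUSED):
		// Matches the kubectl failure above: the port has no listener.
		fmt.Println("connection refused: nothing is listening on :8443")
	default:
		fmt.Println("other failure (timeout/route?):", err)
	}
}

Here a kube-apiserver container exists but nothing serves on 8443, so every describe-nodes attempt fails the same way until the process binds the port or the surrounding wait gives up.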
	I1202 22:01:58.180719  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:01:58.193475  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:01:58.193545  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:01:58.222496  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:01:58.222516  459274 cri.go:89] found id: ""
	I1202 22:01:58.222524  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:01:58.222581  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:58.227985  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:01:58.228061  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:01:58.265180  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:01:58.265199  459274 cri.go:89] found id: ""
	I1202 22:01:58.265207  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:01:58.265267  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:58.269251  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:01:58.269323  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:01:58.304195  459274 cri.go:89] found id: ""
	I1202 22:01:58.304217  459274 logs.go:282] 0 containers: []
	W1202 22:01:58.304225  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:01:58.304232  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:01:58.304291  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:01:58.339871  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:01:58.339895  459274 cri.go:89] found id: ""
	I1202 22:01:58.339903  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:01:58.339957  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:58.346641  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:01:58.346723  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:01:58.404882  459274 cri.go:89] found id: ""
	I1202 22:01:58.404905  459274 logs.go:282] 0 containers: []
	W1202 22:01:58.404913  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:01:58.404919  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:01:58.404976  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:01:58.445524  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:01:58.445547  459274 cri.go:89] found id: ""
	I1202 22:01:58.445556  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:01:58.445630  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:01:58.450314  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:01:58.450388  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:01:58.479532  459274 cri.go:89] found id: ""
	I1202 22:01:58.479554  459274 logs.go:282] 0 containers: []
	W1202 22:01:58.479562  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:01:58.479569  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:01:58.479633  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:01:58.504909  459274 cri.go:89] found id: ""
	I1202 22:01:58.504932  459274 logs.go:282] 0 containers: []
	W1202 22:01:58.504941  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:01:58.504954  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:01:58.504968  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:01:58.521875  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:01:58.521902  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:01:58.581239  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:01:58.581269  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:01:58.616701  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:01:58.616736  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:01:58.672975  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:01:58.673002  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:01:58.755839  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:01:58.755908  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:01:58.755954  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:01:58.817762  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:01:58.817848  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:01:58.853424  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:01:58.853457  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:01:58.883864  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:01:58.883893  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
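The container-status step uses a shell fallback chain, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: try crictl wherever it resolves, and fall back to docker if that fails. The same try-in-order pattern expressed in Go, with a hypothetical helper rather than minikube's API:

package main

import (
	"fmt"
	"os/exec"
)

// firstSuccess runs candidate commands in order and returns the output
// of the first one that exits zero.
func firstSuccess(cmds [][]string) ([]byte, error) {
	var lastErr error
	for _, argv := range cmds {
		out, err := exec.Command(argv[0], argv[1:]...).CombinedOutput()
		if err == nil {
			return out, nil
		}
		lastErr = fmt.Errorf("%v: %w", argv, err)
	}
	return nil, lastErr
}

func main() {
	out, err := firstSuccess([][]string{
		{"sudo", "crictl", "ps", "-a"},
		{"sudo", "docker", "ps", "-a"},
	})
	if err != nil {
		fmt.Println("all runtimes failed:", err)
		return
	}
	fmt.Print(string(out))
}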
	I1202 22:02:01.451615  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:02:01.465297  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:02:01.465456  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:02:01.501110  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:01.501131  459274 cri.go:89] found id: ""
	I1202 22:02:01.501140  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:02:01.501199  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:01.506678  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:02:01.506779  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:02:01.551827  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:01.551845  459274 cri.go:89] found id: ""
	I1202 22:02:01.551853  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:02:01.551915  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:01.556582  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:02:01.556656  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:02:01.594066  459274 cri.go:89] found id: ""
	I1202 22:02:01.594087  459274 logs.go:282] 0 containers: []
	W1202 22:02:01.594095  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:02:01.594102  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:02:01.594164  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:02:01.628265  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:01.628284  459274 cri.go:89] found id: ""
	I1202 22:02:01.628293  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:02:01.628348  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:01.633735  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:02:01.633810  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:02:01.668705  459274 cri.go:89] found id: ""
	I1202 22:02:01.668732  459274 logs.go:282] 0 containers: []
	W1202 22:02:01.668741  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:02:01.668750  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:02:01.668814  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:02:01.714200  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:01.714230  459274 cri.go:89] found id: ""
	I1202 22:02:01.714239  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:02:01.714328  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:01.721104  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:02:01.721206  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:02:01.762787  459274 cri.go:89] found id: ""
	I1202 22:02:01.762815  459274 logs.go:282] 0 containers: []
	W1202 22:02:01.762824  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:02:01.762832  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:02:01.762974  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:02:01.802775  459274 cri.go:89] found id: ""
	I1202 22:02:01.802796  459274 logs.go:282] 0 containers: []
	W1202 22:02:01.802804  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:02:01.802845  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:02:01.802863  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:02:01.905690  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:02:01.905713  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:02:01.905728  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:01.983521  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:02:01.983565  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:02.028223  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:02:02.028257  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:02:02.085273  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:02:02.085313  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:02:02.124280  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:02:02.124310  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:02.190088  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:02:02.190118  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:02.238329  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:02:02.238360  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:02:02.281412  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:02:02.281438  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
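Besides per-container logs, each cycle collects three host-level sources, each capped at the last 400 lines: the kubelet and containerd journald units, and kernel messages filtered to warning severity and above. A sketch of that gathering step; the command lines are copied from the log, while the map and loop are our own framing:

package main

import (
	"fmt"
	"os/exec"
)

var sources = map[string]string{
	"kubelet":    "sudo journalctl -u kubelet -n 400",
	"containerd": "sudo journalctl -u containerd -n 400",
	// Flags as in the log: human-readable dmesg, no pager or color,
	// warning-and-worse levels only, capped at 400 lines.
	"dmesg": "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
}

func main() {
	for name, cmdline := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
		if err != nil {
			fmt.Printf("== %s: %v\n", name, err)
			continue
		}
		fmt.Printf("== %s (%d bytes)\n", name, len(out))
	}
}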
	I1202 22:02:04.859943  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:02:04.870857  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:02:04.870921  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:02:04.896410  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:04.896434  459274 cri.go:89] found id: ""
	I1202 22:02:04.896442  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:02:04.896496  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:04.900401  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:02:04.900475  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:02:04.929402  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:04.929428  459274 cri.go:89] found id: ""
	I1202 22:02:04.929437  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:02:04.929492  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:04.933572  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:02:04.933643  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:02:04.958397  459274 cri.go:89] found id: ""
	I1202 22:02:04.958464  459274 logs.go:282] 0 containers: []
	W1202 22:02:04.958480  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:02:04.958487  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:02:04.958551  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:02:04.983311  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:04.983333  459274 cri.go:89] found id: ""
	I1202 22:02:04.983341  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:02:04.983398  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:04.987518  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:02:04.987604  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:02:05.017937  459274 cri.go:89] found id: ""
	I1202 22:02:05.017963  459274 logs.go:282] 0 containers: []
	W1202 22:02:05.017972  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:02:05.017981  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:02:05.018059  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:02:05.061932  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:05.061950  459274 cri.go:89] found id: ""
	I1202 22:02:05.061957  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:02:05.062055  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:05.066432  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:02:05.066505  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:02:05.101325  459274 cri.go:89] found id: ""
	I1202 22:02:05.101348  459274 logs.go:282] 0 containers: []
	W1202 22:02:05.101357  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:02:05.101363  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:02:05.101430  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:02:05.135199  459274 cri.go:89] found id: ""
	I1202 22:02:05.135267  459274 logs.go:282] 0 containers: []
	W1202 22:02:05.135293  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:02:05.135322  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:02:05.135373  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:05.170586  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:02:05.170616  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:05.206119  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:02:05.206154  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:05.236494  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:02:05.236521  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:02:05.269005  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:02:05.269038  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:02:05.287249  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:02:05.287326  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:02:05.361891  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:02:05.361960  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:02:05.362001  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:05.421629  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:02:05.421721  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:02:05.479924  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:02:05.479949  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
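The cycles recur roughly every three seconds, which looks like a poll-until-healthy loop: re-run the pgrep probe, re-gather logs, and retry describe-nodes until a deadline expires. A generic sketch of such a loop under our own names; the actual interval and timeout inside minikube may differ:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitFor polls check every interval until it succeeds or timeout elapses.
func waitFor(interval, timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %v: %w", timeout, err)
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitFor(3*time.Second, time.Minute, func() error {
		// Same liveness probe the log shows: is an apiserver process up?
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	})
	fmt.Println("apiserver up:", err == nil, err)
}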
	I1202 22:02:08.058143  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:02:08.086443  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:02:08.086522  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:02:08.156058  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:08.156077  459274 cri.go:89] found id: ""
	I1202 22:02:08.156085  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:02:08.156140  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:08.161031  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:02:08.161110  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:02:08.220940  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:08.220960  459274 cri.go:89] found id: ""
	I1202 22:02:08.220968  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:02:08.221026  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:08.225236  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:02:08.225306  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:02:08.298389  459274 cri.go:89] found id: ""
	I1202 22:02:08.298410  459274 logs.go:282] 0 containers: []
	W1202 22:02:08.298419  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:02:08.298425  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:02:08.298483  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:02:08.335901  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:08.335920  459274 cri.go:89] found id: ""
	I1202 22:02:08.335928  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:02:08.335984  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:08.343933  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:02:08.344004  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:02:08.398402  459274 cri.go:89] found id: ""
	I1202 22:02:08.398423  459274 logs.go:282] 0 containers: []
	W1202 22:02:08.398432  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:02:08.398439  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:02:08.398503  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:02:08.443764  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:08.443843  459274 cri.go:89] found id: ""
	I1202 22:02:08.443867  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:02:08.443958  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:08.448104  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:02:08.448190  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:02:08.475206  459274 cri.go:89] found id: ""
	I1202 22:02:08.475227  459274 logs.go:282] 0 containers: []
	W1202 22:02:08.475235  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:02:08.475242  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:02:08.475301  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:02:08.518324  459274 cri.go:89] found id: ""
	I1202 22:02:08.518345  459274 logs.go:282] 0 containers: []
	W1202 22:02:08.518354  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:02:08.518367  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:02:08.518378  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:02:08.596378  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:02:08.596455  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:02:08.618520  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:02:08.618598  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:08.672491  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:02:08.672564  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:08.728554  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:02:08.728589  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:02:08.775615  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:02:08.775652  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:02:08.893454  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:02:08.893477  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:02:08.893493  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:08.962899  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:02:08.962935  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:09.020994  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:02:09.021027  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
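For every ID the enumeration turns up, that container's recent output is tailed with "crictl logs --tail 400 <id>". A small wrapper sketch; the function is hypothetical, and since crictl, like docker, generally accepts unambiguous ID prefixes, the shortened ID below stands in for the full 64-character one from the log:

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs returns up to the last n log lines of one container.
func tailContainerLogs(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "/usr/local/bin/crictl",
		"logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := tailContainerLogs("aad17f876b6c", 400)
	if err != nil {
		fmt.Println("crictl logs:", err)
		return
	}
	fmt.Print(logs)
}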
	I1202 22:02:11.644204  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:02:11.655319  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:02:11.655403  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:02:11.702249  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:11.702273  459274 cri.go:89] found id: ""
	I1202 22:02:11.702282  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:02:11.702336  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:11.706606  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:02:11.706695  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:02:11.770857  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:11.770878  459274 cri.go:89] found id: ""
	I1202 22:02:11.770886  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:02:11.770942  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:11.778163  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:02:11.778253  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:02:11.836318  459274 cri.go:89] found id: ""
	I1202 22:02:11.836344  459274 logs.go:282] 0 containers: []
	W1202 22:02:11.836365  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:02:11.836372  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:02:11.836444  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:02:11.870847  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:11.870878  459274 cri.go:89] found id: ""
	I1202 22:02:11.870887  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:02:11.870962  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:11.876640  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:02:11.876724  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:02:11.934214  459274 cri.go:89] found id: ""
	I1202 22:02:11.934247  459274 logs.go:282] 0 containers: []
	W1202 22:02:11.934261  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:02:11.934268  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:02:11.934328  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:02:11.980065  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:11.980090  459274 cri.go:89] found id: ""
	I1202 22:02:11.980098  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:02:11.980154  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:11.990191  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:02:11.990266  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:02:12.069640  459274 cri.go:89] found id: ""
	I1202 22:02:12.069675  459274 logs.go:282] 0 containers: []
	W1202 22:02:12.069685  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:02:12.069692  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:02:12.069755  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:02:12.163451  459274 cri.go:89] found id: ""
	I1202 22:02:12.163526  459274 logs.go:282] 0 containers: []
	W1202 22:02:12.163550  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:02:12.163588  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:02:12.163616  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:12.226908  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:02:12.227077  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:02:12.262104  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:02:12.262180  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:02:12.306308  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:02:12.306380  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:02:12.399875  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:02:12.399916  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:12.466255  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:02:12.466299  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:12.527912  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:02:12.527954  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:12.582208  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:02:12.582244  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:02:12.602418  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:02:12.602447  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:02:12.691494  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
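The describe-nodes step invokes the version-matched kubectl binary that minikube ships inside the node, pointed at the in-node kubeconfig, and reports stdout and stderr separately, which is why each failure stanza above carries both sections. A sketch of that invocation with the two streams captured independently; the paths are taken verbatim from the log, while the surrounding code is ours:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
	cmd := exec.Command("sudo", kubectl, "describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	if err := cmd.Run(); err != nil {
		// With no apiserver listening, kubectl exits 1 and explains on stderr,
		// producing the empty-stdout, connection-refused stanzas seen above.
		fmt.Printf("exit: %v\nstdout: %s\nstderr: %s\n", err, stdout.String(), stderr.String())
		return
	}
	fmt.Print(stdout.String())
}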
	I1202 22:02:15.192972  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:02:15.205605  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:02:15.205713  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:02:15.243907  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:15.243926  459274 cri.go:89] found id: ""
	I1202 22:02:15.243933  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:02:15.243989  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:15.248816  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:02:15.248882  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:02:15.279320  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:15.279391  459274 cri.go:89] found id: ""
	I1202 22:02:15.279413  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:02:15.279491  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:15.284463  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:02:15.284561  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:02:15.344258  459274 cri.go:89] found id: ""
	I1202 22:02:15.344279  459274 logs.go:282] 0 containers: []
	W1202 22:02:15.344287  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:02:15.344293  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:02:15.344351  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:02:15.407821  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:15.407839  459274 cri.go:89] found id: ""
	I1202 22:02:15.407846  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:02:15.407900  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:15.428163  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:02:15.428234  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:02:15.471983  459274 cri.go:89] found id: ""
	I1202 22:02:15.472057  459274 logs.go:282] 0 containers: []
	W1202 22:02:15.472089  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:02:15.472107  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:02:15.472191  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:02:15.527932  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:15.528005  459274 cri.go:89] found id: ""
	I1202 22:02:15.528028  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:02:15.528122  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:15.537521  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:02:15.537635  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:02:15.596659  459274 cri.go:89] found id: ""
	I1202 22:02:15.596686  459274 logs.go:282] 0 containers: []
	W1202 22:02:15.596695  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:02:15.596702  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:02:15.596811  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:02:15.644216  459274 cri.go:89] found id: ""
	I1202 22:02:15.644241  459274 logs.go:282] 0 containers: []
	W1202 22:02:15.644249  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:02:15.644264  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:02:15.644323  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:02:15.747188  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:02:15.747267  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:02:15.769609  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:02:15.769635  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:02:15.900599  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:02:15.900620  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:02:15.900633  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:15.937482  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:02:15.937510  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:02:15.973641  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:02:15.973728  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:16.054517  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:02:16.054551  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:16.098408  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:02:16.098438  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:16.146957  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:02:16.146996  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:02:18.687448  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:02:18.697481  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:02:18.697552  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:02:18.726072  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:18.726096  459274 cri.go:89] found id: ""
	I1202 22:02:18.726105  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:02:18.726161  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:18.730608  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:02:18.730681  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:02:18.768466  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:18.768488  459274 cri.go:89] found id: ""
	I1202 22:02:18.768496  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:02:18.768548  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:18.775416  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:02:18.775489  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:02:18.802117  459274 cri.go:89] found id: ""
	I1202 22:02:18.802138  459274 logs.go:282] 0 containers: []
	W1202 22:02:18.802146  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:02:18.802152  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:02:18.802207  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:02:18.848361  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:18.848382  459274 cri.go:89] found id: ""
	I1202 22:02:18.848390  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:02:18.848457  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:18.853987  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:02:18.854102  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:02:18.887658  459274 cri.go:89] found id: ""
	I1202 22:02:18.887683  459274 logs.go:282] 0 containers: []
	W1202 22:02:18.887692  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:02:18.887700  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:02:18.887776  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:02:18.929379  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:18.929411  459274 cri.go:89] found id: ""
	I1202 22:02:18.929420  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:02:18.929492  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:18.934953  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:02:18.935040  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:02:18.974794  459274 cri.go:89] found id: ""
	I1202 22:02:18.974825  459274 logs.go:282] 0 containers: []
	W1202 22:02:18.974834  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:02:18.974841  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:02:18.974910  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:02:19.012018  459274 cri.go:89] found id: ""
	I1202 22:02:19.012044  459274 logs.go:282] 0 containers: []
	W1202 22:02:19.012053  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:02:19.012066  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:02:19.012077  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:02:19.094793  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:02:19.094845  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:02:19.119353  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:02:19.119398  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:02:19.198196  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:02:19.198236  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:02:19.198256  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:19.243625  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:02:19.243664  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:19.296087  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:02:19.296115  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:19.346458  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:02:19.346527  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:02:19.405174  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:02:19.405259  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:19.453367  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:02:19.453399  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:02:22.006923  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:02:22.019212  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:02:22.019313  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:02:22.048330  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:22.048350  459274 cri.go:89] found id: ""
	I1202 22:02:22.048358  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:02:22.048417  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:22.053169  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:02:22.053242  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:02:22.080339  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:22.080364  459274 cri.go:89] found id: ""
	I1202 22:02:22.080377  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:02:22.080437  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:22.084872  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:02:22.084961  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:02:22.117026  459274 cri.go:89] found id: ""
	I1202 22:02:22.117047  459274 logs.go:282] 0 containers: []
	W1202 22:02:22.117061  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:02:22.117067  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:02:22.117129  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:02:22.148121  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:22.148144  459274 cri.go:89] found id: ""
	I1202 22:02:22.148151  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:02:22.148212  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:22.152915  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:02:22.152995  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:02:22.180512  459274 cri.go:89] found id: ""
	I1202 22:02:22.180537  459274 logs.go:282] 0 containers: []
	W1202 22:02:22.180545  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:02:22.180552  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:02:22.180611  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:02:22.207547  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:22.207568  459274 cri.go:89] found id: ""
	I1202 22:02:22.207576  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:02:22.207628  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:22.211434  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:02:22.211505  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:02:22.236667  459274 cri.go:89] found id: ""
	I1202 22:02:22.236688  459274 logs.go:282] 0 containers: []
	W1202 22:02:22.236696  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:02:22.236732  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:02:22.236791  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:02:22.261415  459274 cri.go:89] found id: ""
	I1202 22:02:22.261437  459274 logs.go:282] 0 containers: []
	W1202 22:02:22.261445  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:02:22.261458  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:02:22.261471  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:02:22.278645  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:02:22.278674  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:22.332612  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:02:22.332669  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:22.378237  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:02:22.378264  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:02:22.413184  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:02:22.413209  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:02:22.476343  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:02:22.476380  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:02:22.554845  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:02:22.554863  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:02:22.554903  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:22.604126  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:02:22.604159  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:22.647492  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:02:22.647525  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:02:25.190239  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:02:25.200702  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:02:25.200774  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:02:25.240981  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:25.241002  459274 cri.go:89] found id: ""
	I1202 22:02:25.241010  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:02:25.241067  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:25.245686  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:02:25.245787  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:02:25.274207  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:25.274228  459274 cri.go:89] found id: ""
	I1202 22:02:25.274237  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:02:25.274292  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:25.278102  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:02:25.278175  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:02:25.317794  459274 cri.go:89] found id: ""
	I1202 22:02:25.317812  459274 logs.go:282] 0 containers: []
	W1202 22:02:25.317821  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:02:25.317827  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:02:25.317915  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:02:25.359541  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:25.359560  459274 cri.go:89] found id: ""
	I1202 22:02:25.359568  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:02:25.359625  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:25.363774  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:02:25.363844  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:02:25.392660  459274 cri.go:89] found id: ""
	I1202 22:02:25.392683  459274 logs.go:282] 0 containers: []
	W1202 22:02:25.392691  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:02:25.392699  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:02:25.392761  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:02:25.422016  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:25.422040  459274 cri.go:89] found id: ""
	I1202 22:02:25.422048  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:02:25.422107  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:25.426107  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:02:25.426185  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:02:25.455603  459274 cri.go:89] found id: ""
	I1202 22:02:25.455625  459274 logs.go:282] 0 containers: []
	W1202 22:02:25.455634  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:02:25.455646  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:02:25.455709  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:02:25.483434  459274 cri.go:89] found id: ""
	I1202 22:02:25.483456  459274 logs.go:282] 0 containers: []
	W1202 22:02:25.483464  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:02:25.483478  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:02:25.483491  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:25.521358  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:02:25.521386  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:25.550380  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:02:25.550407  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:02:25.610818  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:02:25.610856  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:02:25.628085  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:02:25.628117  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:25.664420  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:02:25.664459  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:25.705915  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:02:25.705948  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:02:25.741112  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:02:25.741149  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:02:25.770622  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:02:25.770645  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:02:25.837935  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
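	(The recurring "connection to the server localhost:8443 was refused" stderr means kubectl cannot reach the apiserver even though its container id keeps being found. A generic diagnostic sketch for distinguishing "nothing listening on the port" from "apiserver up but unhealthy" — these are standard tools, not part of the test harness:
	    # is anything bound to the apiserver port on the node?
	    sudo ss -tlnp | grep ':8443'
	    # does the apiserver answer its health endpoint at all?
	    curl -ks https://localhost:8443/healthz; echo
	)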
	I1202 22:02:28.338400  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:02:28.350426  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:02:28.350495  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:02:28.378879  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:28.378902  459274 cri.go:89] found id: ""
	I1202 22:02:28.378910  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:02:28.378967  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:28.382892  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:02:28.382970  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:02:28.408475  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:28.408497  459274 cri.go:89] found id: ""
	I1202 22:02:28.408505  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:02:28.408563  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:28.412625  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:02:28.412694  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:02:28.440169  459274 cri.go:89] found id: ""
	I1202 22:02:28.440192  459274 logs.go:282] 0 containers: []
	W1202 22:02:28.440200  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:02:28.440206  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:02:28.440270  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:02:28.466809  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:28.466830  459274 cri.go:89] found id: ""
	I1202 22:02:28.466838  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:02:28.466905  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:28.471296  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:02:28.471379  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:02:28.496732  459274 cri.go:89] found id: ""
	I1202 22:02:28.496755  459274 logs.go:282] 0 containers: []
	W1202 22:02:28.496764  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:02:28.496771  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:02:28.496830  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:02:28.528843  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:28.528865  459274 cri.go:89] found id: ""
	I1202 22:02:28.528873  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:02:28.528929  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:28.532856  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:02:28.532926  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:02:28.566386  459274 cri.go:89] found id: ""
	I1202 22:02:28.566410  459274 logs.go:282] 0 containers: []
	W1202 22:02:28.566419  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:02:28.566431  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:02:28.566490  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:02:28.592563  459274 cri.go:89] found id: ""
	I1202 22:02:28.592585  459274 logs.go:282] 0 containers: []
	W1202 22:02:28.592593  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:02:28.592611  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:02:28.592622  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:02:28.665410  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:02:28.665430  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:02:28.665442  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:28.707940  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:02:28.707973  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:28.744600  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:02:28.744643  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:28.772040  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:02:28.772071  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:02:28.807695  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:02:28.807730  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:02:28.874572  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:02:28.874609  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:02:28.892773  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:02:28.892796  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:28.929148  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:02:28.929183  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:02:31.460915  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:02:31.472621  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:02:31.472688  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:02:31.509044  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:31.509132  459274 cri.go:89] found id: ""
	I1202 22:02:31.509144  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:02:31.509210  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:31.513648  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:02:31.513732  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:02:31.550806  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:31.550825  459274 cri.go:89] found id: ""
	I1202 22:02:31.550833  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:02:31.550890  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:31.554705  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:02:31.554794  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:02:31.585042  459274 cri.go:89] found id: ""
	I1202 22:02:31.585064  459274 logs.go:282] 0 containers: []
	W1202 22:02:31.585072  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:02:31.585078  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:02:31.585141  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:02:31.616391  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:31.616409  459274 cri.go:89] found id: ""
	I1202 22:02:31.616417  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:02:31.616471  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:31.620747  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:02:31.620835  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:02:31.650423  459274 cri.go:89] found id: ""
	I1202 22:02:31.650445  459274 logs.go:282] 0 containers: []
	W1202 22:02:31.650454  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:02:31.650461  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:02:31.650516  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:02:31.684497  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:31.684518  459274 cri.go:89] found id: ""
	I1202 22:02:31.684584  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:02:31.684644  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:31.688843  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:02:31.688917  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:02:31.727674  459274 cri.go:89] found id: ""
	I1202 22:02:31.727696  459274 logs.go:282] 0 containers: []
	W1202 22:02:31.727704  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:02:31.727710  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:02:31.727767  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:02:31.763730  459274 cri.go:89] found id: ""
	I1202 22:02:31.763752  459274 logs.go:282] 0 containers: []
	W1202 22:02:31.763760  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:02:31.763773  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:02:31.763790  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:02:31.839753  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:02:31.839879  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:02:31.857342  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:02:31.857488  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:31.907907  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:02:31.907946  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:31.941583  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:02:31.941613  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:02:31.974949  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:02:31.975027  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:02:32.056390  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:02:32.056412  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:02:32.056427  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:32.109417  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:02:32.109490  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:32.146215  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:02:32.146249  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:02:34.695538  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:02:34.706354  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:02:34.706421  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:02:34.733573  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:34.733597  459274 cri.go:89] found id: ""
	I1202 22:02:34.733605  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:02:34.733677  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:34.737983  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:02:34.738061  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:02:34.764010  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:34.764027  459274 cri.go:89] found id: ""
	I1202 22:02:34.764035  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:02:34.764094  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:34.768445  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:02:34.768517  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:02:34.794751  459274 cri.go:89] found id: ""
	I1202 22:02:34.794772  459274 logs.go:282] 0 containers: []
	W1202 22:02:34.794780  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:02:34.794787  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:02:34.794843  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:02:34.820619  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:34.820638  459274 cri.go:89] found id: ""
	I1202 22:02:34.820646  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:02:34.820699  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:34.825001  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:02:34.825068  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:02:34.849672  459274 cri.go:89] found id: ""
	I1202 22:02:34.849731  459274 logs.go:282] 0 containers: []
	W1202 22:02:34.849762  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:02:34.849781  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:02:34.849870  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:02:34.885000  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:34.885017  459274 cri.go:89] found id: ""
	I1202 22:02:34.885025  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:02:34.885082  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:34.889969  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:02:34.890049  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:02:34.934765  459274 cri.go:89] found id: ""
	I1202 22:02:34.934786  459274 logs.go:282] 0 containers: []
	W1202 22:02:34.934793  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:02:34.934800  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:02:34.934857  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:02:34.979738  459274 cri.go:89] found id: ""
	I1202 22:02:34.979761  459274 logs.go:282] 0 containers: []
	W1202 22:02:34.979769  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:02:34.979782  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:02:34.979799  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:02:34.996456  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:02:34.996533  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:02:35.108966  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:02:35.108983  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:02:35.109002  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:35.167733  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:02:35.167809  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:35.231511  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:02:35.231549  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:35.270975  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:02:35.271063  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:35.304107  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:02:35.304187  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:02:35.338098  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:02:35.338134  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:02:35.404096  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:02:35.404124  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:02:37.962414  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:02:37.975299  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:02:37.975364  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:02:38.024860  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:38.024882  459274 cri.go:89] found id: ""
	I1202 22:02:38.024892  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:02:38.024954  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:38.034253  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:02:38.034338  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:02:38.077735  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:38.077813  459274 cri.go:89] found id: ""
	I1202 22:02:38.077834  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:02:38.077917  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:38.083927  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:02:38.084006  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:02:38.121415  459274 cri.go:89] found id: ""
	I1202 22:02:38.121438  459274 logs.go:282] 0 containers: []
	W1202 22:02:38.121446  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:02:38.121461  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:02:38.121522  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:02:38.153149  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:38.153168  459274 cri.go:89] found id: ""
	I1202 22:02:38.153176  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:02:38.153234  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:38.164304  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:02:38.164376  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:02:38.206395  459274 cri.go:89] found id: ""
	I1202 22:02:38.206478  459274 logs.go:282] 0 containers: []
	W1202 22:02:38.206501  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:02:38.206542  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:02:38.206635  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:02:38.242850  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:38.242923  459274 cri.go:89] found id: ""
	I1202 22:02:38.242955  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:02:38.243035  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:38.250579  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:02:38.250700  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:02:38.284638  459274 cri.go:89] found id: ""
	I1202 22:02:38.284713  459274 logs.go:282] 0 containers: []
	W1202 22:02:38.284739  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:02:38.284766  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:02:38.284861  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:02:38.321301  459274 cri.go:89] found id: ""
	I1202 22:02:38.321372  459274 logs.go:282] 0 containers: []
	W1202 22:02:38.321395  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:02:38.321422  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:02:38.321464  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:02:38.361481  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:02:38.361549  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:02:38.387798  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:02:38.387870  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:02:38.498946  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:02:38.498967  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:02:38.498981  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:02:38.555018  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:02:38.555046  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:02:38.627979  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:02:38.628015  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:38.669346  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:02:38.669382  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:38.710013  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:02:38.710048  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:38.758347  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:02:38.758378  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:41.292615  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:02:41.309918  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:02:41.309989  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:02:41.360899  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:41.360923  459274 cri.go:89] found id: ""
	I1202 22:02:41.360932  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:02:41.360990  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:41.370890  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:02:41.370966  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:02:41.399944  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:41.399964  459274 cri.go:89] found id: ""
	I1202 22:02:41.399973  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:02:41.400026  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:41.404175  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:02:41.404246  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:02:41.442355  459274 cri.go:89] found id: ""
	I1202 22:02:41.442379  459274 logs.go:282] 0 containers: []
	W1202 22:02:41.442387  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:02:41.442394  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:02:41.442453  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:02:41.500181  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:41.500219  459274 cri.go:89] found id: ""
	I1202 22:02:41.500229  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:02:41.500292  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:41.504809  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:02:41.504892  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:02:41.541545  459274 cri.go:89] found id: ""
	I1202 22:02:41.541578  459274 logs.go:282] 0 containers: []
	W1202 22:02:41.541587  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:02:41.541593  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:02:41.541675  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:02:41.576047  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:41.576072  459274 cri.go:89] found id: ""
	I1202 22:02:41.576081  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:02:41.576135  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:41.580493  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:02:41.580571  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:02:41.623297  459274 cri.go:89] found id: ""
	I1202 22:02:41.623323  459274 logs.go:282] 0 containers: []
	W1202 22:02:41.623332  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:02:41.623339  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:02:41.623395  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:02:41.649218  459274 cri.go:89] found id: ""
	I1202 22:02:41.649245  459274 logs.go:282] 0 containers: []
	W1202 22:02:41.649254  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:02:41.649268  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:02:41.649280  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:02:41.714302  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:02:41.714340  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:02:41.732714  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:02:41.732743  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:02:41.820412  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:02:41.820458  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:02:41.820480  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:41.858434  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:02:41.858465  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:41.912221  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:02:41.912259  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:41.955056  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:02:41.955097  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:41.987838  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:02:41.987867  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:02:42.030327  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:02:42.030371  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
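Each block above, from the pgrep probe down to the final "Gathering logs" entry, is one pass of minikube's apiserver wait loop: the process check fails, so the full diagnostic set is re-collected before the next attempt. A minimal sketch of the probe being repeated, assuming shell access to the node; the profile name is a placeholder, since it does not appear in this excerpt:

    # probe: is a kube-apiserver process matching minikube's pattern running?
    minikube ssh -p <profile> -- "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
    # exit 0: process found, polling would stop; exit 1: not found,
    # which is what keeps this loop cycling.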
	I1202 22:02:44.612319  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:02:44.623016  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:02:44.623078  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:02:44.652147  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:44.652168  459274 cri.go:89] found id: ""
	I1202 22:02:44.652177  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:02:44.652268  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:44.657808  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:02:44.657891  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:02:44.700237  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:44.700257  459274 cri.go:89] found id: ""
	I1202 22:02:44.700274  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:02:44.700343  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:44.706312  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:02:44.706462  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:02:44.739443  459274 cri.go:89] found id: ""
	I1202 22:02:44.739482  459274 logs.go:282] 0 containers: []
	W1202 22:02:44.739491  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:02:44.739497  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:02:44.739567  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:02:44.776553  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:44.776582  459274 cri.go:89] found id: ""
	I1202 22:02:44.776591  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:02:44.776672  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:44.781993  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:02:44.782088  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:02:44.865863  459274 cri.go:89] found id: ""
	I1202 22:02:44.865953  459274 logs.go:282] 0 containers: []
	W1202 22:02:44.865976  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:02:44.866052  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:02:44.866187  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:02:44.931719  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:44.931748  459274 cri.go:89] found id: ""
	I1202 22:02:44.931757  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:02:44.931822  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:44.936309  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:02:44.936380  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:02:44.964054  459274 cri.go:89] found id: ""
	I1202 22:02:44.964074  459274 logs.go:282] 0 containers: []
	W1202 22:02:44.964083  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:02:44.964089  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:02:44.964154  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:02:44.995970  459274 cri.go:89] found id: ""
	I1202 22:02:44.995991  459274 logs.go:282] 0 containers: []
	W1202 22:02:44.995999  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:02:44.996014  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:02:44.996026  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:02:45.200299  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:02:45.200319  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:02:45.200332  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:45.287539  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:02:45.287740  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:45.338135  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:02:45.338212  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:45.384646  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:02:45.384722  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:45.427458  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:02:45.427585  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:02:45.470551  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:02:45.470581  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:02:45.509113  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:02:45.509150  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:02:45.607751  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:02:45.607792  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
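The discovery half of each pass queries the CRI by name for every expected component: a quoted ID means the container exists in some state, while an empty result produces the "No container was found matching" warning seen for coredns, kube-proxy, kindnet and storage-provisioner. A sketch reproducing the same enumeration on the node, using only the crictl flags that appear in this log:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      printf '%s: %s\n' "$c" "${ids:-<none>}"
    done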
	I1202 22:02:48.131256  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:02:48.146750  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:02:48.146826  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:02:48.200096  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:48.200115  459274 cri.go:89] found id: ""
	I1202 22:02:48.200124  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:02:48.200180  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:48.204770  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:02:48.204840  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:02:48.231864  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:48.231881  459274 cri.go:89] found id: ""
	I1202 22:02:48.231889  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:02:48.231942  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:48.236213  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:02:48.236278  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:02:48.277020  459274 cri.go:89] found id: ""
	I1202 22:02:48.277041  459274 logs.go:282] 0 containers: []
	W1202 22:02:48.277049  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:02:48.277055  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:02:48.277113  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:02:48.308109  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:48.308127  459274 cri.go:89] found id: ""
	I1202 22:02:48.308135  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:02:48.308195  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:48.312833  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:02:48.312959  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:02:48.351682  459274 cri.go:89] found id: ""
	I1202 22:02:48.351703  459274 logs.go:282] 0 containers: []
	W1202 22:02:48.351711  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:02:48.351719  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:02:48.351790  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:02:48.381535  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:48.381553  459274 cri.go:89] found id: ""
	I1202 22:02:48.381561  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:02:48.381614  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:48.386594  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:02:48.386661  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:02:48.415853  459274 cri.go:89] found id: ""
	I1202 22:02:48.415876  459274 logs.go:282] 0 containers: []
	W1202 22:02:48.415885  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:02:48.415892  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:02:48.415956  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:02:48.450530  459274 cri.go:89] found id: ""
	I1202 22:02:48.450554  459274 logs.go:282] 0 containers: []
	W1202 22:02:48.450563  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:02:48.450579  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:02:48.450590  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:02:48.514081  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:02:48.514117  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:02:48.531048  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:02:48.531080  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:48.577710  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:02:48.577805  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:48.637699  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:02:48.637803  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:48.700341  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:02:48.700376  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:02:48.801260  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:02:48.801283  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:02:48.801297  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:48.878847  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:02:48.878875  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:02:48.931930  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:02:48.932023  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
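Every "describe nodes" attempt fails identically: kubectl gets connection refused on localhost:8443 inside the node, even though a kube-apiserver container ID is found on every pass. That combination suggests a container that exists but is not serving (crashed, restarting, or stuck before binding the port). A hedged way to confirm from inside the node:

    # is anything answering on the apiserver port?
    curl -ksS https://localhost:8443/healthz; echo
    # what state does the runtime report for the apiserver container?
    sudo crictl ps -a --name=kube-apiserver    # STATE column: Running vs Exited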
	I1202 22:02:51.466629  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:02:51.476872  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:02:51.476951  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:02:51.507224  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:51.507247  459274 cri.go:89] found id: ""
	I1202 22:02:51.507255  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:02:51.507320  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:51.511966  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:02:51.512046  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:02:51.540380  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:51.540409  459274 cri.go:89] found id: ""
	I1202 22:02:51.540421  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:02:51.540509  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:51.554718  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:02:51.554821  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:02:51.610630  459274 cri.go:89] found id: ""
	I1202 22:02:51.610670  459274 logs.go:282] 0 containers: []
	W1202 22:02:51.610679  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:02:51.610686  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:02:51.610756  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:02:51.655672  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:51.655735  459274 cri.go:89] found id: ""
	I1202 22:02:51.655758  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:02:51.655846  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:51.664311  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:02:51.664401  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:02:51.696289  459274 cri.go:89] found id: ""
	I1202 22:02:51.696322  459274 logs.go:282] 0 containers: []
	W1202 22:02:51.696332  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:02:51.696339  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:02:51.696412  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:02:51.735496  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:51.735532  459274 cri.go:89] found id: ""
	I1202 22:02:51.735540  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:02:51.735613  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:51.740685  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:02:51.740803  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:02:51.771335  459274 cri.go:89] found id: ""
	I1202 22:02:51.771382  459274 logs.go:282] 0 containers: []
	W1202 22:02:51.771391  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:02:51.771397  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:02:51.771473  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:02:51.808573  459274 cri.go:89] found id: ""
	I1202 22:02:51.808600  459274 logs.go:282] 0 containers: []
	W1202 22:02:51.808630  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:02:51.808645  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:02:51.808663  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:02:51.919664  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:02:51.919717  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:02:51.951892  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:02:51.951929  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:02:52.041645  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:02:52.041691  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:02:52.041708  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:52.100326  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:02:52.100362  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:52.137378  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:02:52.137410  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:52.182477  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:02:52.182514  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:02:52.217402  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:02:52.217440  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:52.250506  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:02:52.250538  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
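The four components that never appear are the expected casualties of an apiserver that never becomes healthy: kubeadm creates coredns and kube-proxy only once the control plane answers, and minikube applies storage-provisioner (and kindnet, when that CNI is selected) afterwards as well. Any API query from the node should therefore fail the same way, e.g. with the kubectl binary path shown in this log:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get pods -A \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # expected here: the same "connection to the server localhost:8443 was
    # refused" error that every describe-nodes attempt reports.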
	I1202 22:02:54.786038  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:02:54.800511  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:02:54.800586  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:02:54.849244  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:54.849269  459274 cri.go:89] found id: ""
	I1202 22:02:54.849277  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:02:54.849331  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:54.858314  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:02:54.858393  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:02:54.898712  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:54.898737  459274 cri.go:89] found id: ""
	I1202 22:02:54.898745  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:02:54.898800  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:54.906291  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:02:54.906378  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:02:54.948204  459274 cri.go:89] found id: ""
	I1202 22:02:54.948230  459274 logs.go:282] 0 containers: []
	W1202 22:02:54.948252  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:02:54.948261  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:02:54.948378  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:02:54.995667  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:54.995692  459274 cri.go:89] found id: ""
	I1202 22:02:54.995700  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:02:54.995753  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:55.006084  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:02:55.006185  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:02:55.072037  459274 cri.go:89] found id: ""
	I1202 22:02:55.072064  459274 logs.go:282] 0 containers: []
	W1202 22:02:55.072073  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:02:55.072080  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:02:55.072142  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:02:55.161000  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:55.161023  459274 cri.go:89] found id: ""
	I1202 22:02:55.161032  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:02:55.161089  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:55.169341  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:02:55.169425  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:02:55.222292  459274 cri.go:89] found id: ""
	I1202 22:02:55.222317  459274 logs.go:282] 0 containers: []
	W1202 22:02:55.222326  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:02:55.222333  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:02:55.222392  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:02:55.267607  459274 cri.go:89] found id: ""
	I1202 22:02:55.267640  459274 logs.go:282] 0 containers: []
	W1202 22:02:55.267649  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:02:55.267664  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:02:55.267678  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:02:55.293488  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:02:55.293517  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:02:55.397176  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:02:55.397203  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:02:55.397217  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:55.458287  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:02:55.458325  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:55.540087  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:02:55.540120  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:55.580733  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:02:55.580765  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:02:55.651865  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:02:55.651935  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:55.726834  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:02:55.726872  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:02:55.768726  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:02:55.768763  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:02:58.335058  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:02:58.346972  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:02:58.347046  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:02:58.399749  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:58.399774  459274 cri.go:89] found id: ""
	I1202 22:02:58.399786  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:02:58.399844  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:58.404229  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:02:58.404304  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:02:58.434724  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:58.434742  459274 cri.go:89] found id: ""
	I1202 22:02:58.434750  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:02:58.434804  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:58.439120  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:02:58.439187  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:02:58.472216  459274 cri.go:89] found id: ""
	I1202 22:02:58.472238  459274 logs.go:282] 0 containers: []
	W1202 22:02:58.472246  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:02:58.472254  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:02:58.472312  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:02:58.507152  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:58.507170  459274 cri.go:89] found id: ""
	I1202 22:02:58.507178  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:02:58.507234  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:58.511749  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:02:58.511817  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:02:58.543951  459274 cri.go:89] found id: ""
	I1202 22:02:58.543973  459274 logs.go:282] 0 containers: []
	W1202 22:02:58.543982  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:02:58.543989  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:02:58.544048  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:02:58.578140  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:58.578158  459274 cri.go:89] found id: ""
	I1202 22:02:58.578165  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:02:58.578223  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:02:58.582575  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:02:58.582644  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:02:58.615001  459274 cri.go:89] found id: ""
	I1202 22:02:58.615024  459274 logs.go:282] 0 containers: []
	W1202 22:02:58.615033  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:02:58.615039  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:02:58.615099  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:02:58.662340  459274 cri.go:89] found id: ""
	I1202 22:02:58.662361  459274 logs.go:282] 0 containers: []
	W1202 22:02:58.662370  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:02:58.662384  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:02:58.662396  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:02:58.682245  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:02:58.682326  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:02:58.722384  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:02:58.722460  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:02:58.757082  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:02:58.757113  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:02:58.819756  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:02:58.819834  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:02:58.944049  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:02:58.944068  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:02:58.944081  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:02:58.979430  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:02:58.979505  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:02:59.041337  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:02:59.045732  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:02:59.121025  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:02:59.121163  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
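The collection half of each pass is the same fixed bundle, only its ordering varies: kubelet and containerd units from journalctl, kernel warnings from dmesg, and the last 400 lines of every discovered container from crictl. Run by hand on the node it amounts to the following, with the container ID left as a placeholder:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /usr/local/bin/crictl logs --tail 400 <container-id>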
	I1202 22:03:01.671481  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:03:01.681875  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:03:01.681953  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:03:01.706966  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:01.706989  459274 cri.go:89] found id: ""
	I1202 22:03:01.706997  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:03:01.707055  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:01.711179  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:03:01.711254  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:03:01.742049  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:01.742069  459274 cri.go:89] found id: ""
	I1202 22:03:01.742079  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:03:01.742136  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:01.746219  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:03:01.746294  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:03:01.778156  459274 cri.go:89] found id: ""
	I1202 22:03:01.778197  459274 logs.go:282] 0 containers: []
	W1202 22:03:01.778206  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:03:01.778213  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:03:01.778272  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:03:01.881117  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:01.881179  459274 cri.go:89] found id: ""
	I1202 22:03:01.881188  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:03:01.881325  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:01.887211  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:03:01.887293  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:03:01.939394  459274 cri.go:89] found id: ""
	I1202 22:03:01.939440  459274 logs.go:282] 0 containers: []
	W1202 22:03:01.939449  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:03:01.939459  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:03:01.939601  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:03:01.988289  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:01.988319  459274 cri.go:89] found id: ""
	I1202 22:03:01.988329  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:03:01.988544  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:01.995424  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:03:01.995502  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:03:02.033155  459274 cri.go:89] found id: ""
	I1202 22:03:02.033180  459274 logs.go:282] 0 containers: []
	W1202 22:03:02.033189  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:03:02.033195  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:03:02.033256  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:03:02.099935  459274 cri.go:89] found id: ""
	I1202 22:03:02.099960  459274 logs.go:282] 0 containers: []
	W1202 22:03:02.099970  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:03:02.099984  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:03:02.099996  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:03:02.138941  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:03:02.138978  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:03:02.165735  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:03:02.165771  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:02.229997  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:03:02.230048  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:02.303231  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:03:02.303274  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:02.351969  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:03:02.352006  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:03:02.407968  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:03:02.407996  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:03:02.484963  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:03:02.484999  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:03:02.609985  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:03:02.610007  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:03:02.610021  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:05.181755  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:03:05.200874  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:03:05.200945  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:03:05.252360  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:05.252378  459274 cri.go:89] found id: ""
	I1202 22:03:05.252386  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:03:05.252442  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:05.257111  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:03:05.257179  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:03:05.303813  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:05.303832  459274 cri.go:89] found id: ""
	I1202 22:03:05.303899  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:03:05.303961  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:05.308868  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:03:05.308934  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:03:05.338846  459274 cri.go:89] found id: ""
	I1202 22:03:05.338868  459274 logs.go:282] 0 containers: []
	W1202 22:03:05.338877  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:03:05.338883  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:03:05.338944  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:03:05.371232  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:05.371251  459274 cri.go:89] found id: ""
	I1202 22:03:05.371259  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:03:05.371332  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:05.376861  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:03:05.376931  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:03:05.404200  459274 cri.go:89] found id: ""
	I1202 22:03:05.404222  459274 logs.go:282] 0 containers: []
	W1202 22:03:05.404231  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:03:05.404238  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:03:05.404296  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:03:05.444852  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:05.444870  459274 cri.go:89] found id: ""
	I1202 22:03:05.444879  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:03:05.444932  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:05.449724  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:03:05.449827  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:03:05.482661  459274 cri.go:89] found id: ""
	I1202 22:03:05.482698  459274 logs.go:282] 0 containers: []
	W1202 22:03:05.482708  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:03:05.482715  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:03:05.482779  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:03:05.527299  459274 cri.go:89] found id: ""
	I1202 22:03:05.527319  459274 logs.go:282] 0 containers: []
	W1202 22:03:05.527327  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:03:05.527339  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:03:05.527350  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:03:05.572194  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:03:05.575979  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:03:05.661476  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:03:05.661550  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:03:05.757165  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:03:05.757183  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:03:05.757196  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:05.822767  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:03:05.822794  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:05.900353  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:03:05.900423  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:03:05.957513  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:03:05.957539  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:03:05.986566  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:03:05.986600  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:06.043301  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:03:06.043382  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
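The "container status" step hedges its tooling: the backquoted `which crictl || echo crictl` substitutes the resolved binary path when crictl is installed and otherwise leaves the bare name for PATH lookup, and the trailing `|| sudo docker ps -a` falls back to docker if the crictl invocation fails entirely. The same command in the more common $(...) form:

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a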
	I1202 22:03:08.584297  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:03:08.602635  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:03:08.602706  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:03:08.641035  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:08.641054  459274 cri.go:89] found id: ""
	I1202 22:03:08.641062  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:03:08.641132  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:08.645565  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:03:08.645637  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:03:08.683684  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:08.683701  459274 cri.go:89] found id: ""
	I1202 22:03:08.683709  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:03:08.683764  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:08.689318  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:03:08.689399  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:03:08.722827  459274 cri.go:89] found id: ""
	I1202 22:03:08.722856  459274 logs.go:282] 0 containers: []
	W1202 22:03:08.722865  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:03:08.722872  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:03:08.722942  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:03:08.755012  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:08.755034  459274 cri.go:89] found id: ""
	I1202 22:03:08.755042  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:03:08.755100  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:08.759029  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:03:08.759099  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:03:08.798737  459274 cri.go:89] found id: ""
	I1202 22:03:08.798759  459274 logs.go:282] 0 containers: []
	W1202 22:03:08.798768  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:03:08.798774  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:03:08.798846  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:03:08.848083  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:08.848114  459274 cri.go:89] found id: ""
	I1202 22:03:08.848122  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:03:08.848230  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:08.852266  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:03:08.852434  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:03:08.898714  459274 cri.go:89] found id: ""
	I1202 22:03:08.898798  459274 logs.go:282] 0 containers: []
	W1202 22:03:08.898822  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:03:08.898842  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:03:08.898933  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:03:08.928408  459274 cri.go:89] found id: ""
	I1202 22:03:08.928501  459274 logs.go:282] 0 containers: []
	W1202 22:03:08.928533  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:03:08.928577  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:03:08.928613  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:08.968610  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:03:08.968667  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:08.999146  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:03:08.999435  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:03:09.037150  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:03:09.037221  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:03:09.075533  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:03:09.075608  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:03:09.148800  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:03:09.148875  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:03:09.168745  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:03:09.168773  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:03:09.258387  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:03:09.258410  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:03:09.258423  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:09.339433  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:03:09.339468  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:11.921757  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:03:11.938331  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:03:11.938411  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:03:11.992948  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:11.992971  459274 cri.go:89] found id: ""
	I1202 22:03:11.992980  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:03:11.993035  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:11.997333  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:03:11.997417  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:03:12.049462  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:12.049495  459274 cri.go:89] found id: ""
	I1202 22:03:12.049504  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:03:12.049570  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:12.058215  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:03:12.058341  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:03:12.128044  459274 cri.go:89] found id: ""
	I1202 22:03:12.128075  459274 logs.go:282] 0 containers: []
	W1202 22:03:12.128084  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:03:12.128092  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:03:12.128156  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:03:12.170960  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:12.170991  459274 cri.go:89] found id: ""
	I1202 22:03:12.171000  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:03:12.171054  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:12.175537  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:03:12.175624  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:03:12.222944  459274 cri.go:89] found id: ""
	I1202 22:03:12.222979  459274 logs.go:282] 0 containers: []
	W1202 22:03:12.222989  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:03:12.222995  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:03:12.223061  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:03:12.263218  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:12.263257  459274 cri.go:89] found id: ""
	I1202 22:03:12.263272  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:03:12.263340  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:12.267733  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:03:12.267817  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:03:12.323039  459274 cri.go:89] found id: ""
	I1202 22:03:12.323079  459274 logs.go:282] 0 containers: []
	W1202 22:03:12.323087  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:03:12.323094  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:03:12.323160  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:03:12.370976  459274 cri.go:89] found id: ""
	I1202 22:03:12.371016  459274 logs.go:282] 0 containers: []
	W1202 22:03:12.371025  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:03:12.371039  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:03:12.371050  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:03:12.458787  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:03:12.458826  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:03:12.476433  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:03:12.476489  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:12.532507  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:03:12.532543  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:12.609023  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:03:12.609104  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:12.686895  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:03:12.686935  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:12.742664  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:03:12.742706  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:03:12.791842  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:03:12.791878  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:03:12.915701  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:03:12.915724  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:03:12.915736  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:03:15.451083  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:03:15.461287  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:03:15.461356  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:03:15.487216  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:15.487238  459274 cri.go:89] found id: ""
	I1202 22:03:15.487246  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:03:15.487304  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:15.491453  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:03:15.491578  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:03:15.518239  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:15.518269  459274 cri.go:89] found id: ""
	I1202 22:03:15.518277  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:03:15.518335  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:15.522353  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:03:15.522432  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:03:15.548516  459274 cri.go:89] found id: ""
	I1202 22:03:15.548546  459274 logs.go:282] 0 containers: []
	W1202 22:03:15.548554  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:03:15.548561  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:03:15.548623  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:03:15.577604  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:15.577628  459274 cri.go:89] found id: ""
	I1202 22:03:15.577636  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:03:15.577715  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:15.581924  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:03:15.582003  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:03:15.612876  459274 cri.go:89] found id: ""
	I1202 22:03:15.612904  459274 logs.go:282] 0 containers: []
	W1202 22:03:15.612913  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:03:15.612919  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:03:15.612980  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:03:15.645477  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:15.645500  459274 cri.go:89] found id: ""
	I1202 22:03:15.645508  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:03:15.645563  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:15.649552  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:03:15.649622  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:03:15.676101  459274 cri.go:89] found id: ""
	I1202 22:03:15.676172  459274 logs.go:282] 0 containers: []
	W1202 22:03:15.676195  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:03:15.676215  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:03:15.676298  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:03:15.704145  459274 cri.go:89] found id: ""
	I1202 22:03:15.704212  459274 logs.go:282] 0 containers: []
	W1202 22:03:15.704236  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:03:15.704265  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:03:15.704297  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:03:15.763982  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:03:15.764020  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:15.794059  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:03:15.794087  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:03:15.831237  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:03:15.831305  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:03:15.848184  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:03:15.848218  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:03:15.913948  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:03:15.913970  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:03:15.913983  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:15.947765  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:03:15.947795  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:15.982276  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:03:15.982306  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:16.023918  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:03:16.023949  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:03:18.557909  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:03:18.569499  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:03:18.569565  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:03:18.596330  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:18.596349  459274 cri.go:89] found id: ""
	I1202 22:03:18.596357  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:03:18.596416  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:18.601025  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:03:18.601094  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:03:18.629793  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:18.629812  459274 cri.go:89] found id: ""
	I1202 22:03:18.629821  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:03:18.629877  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:18.633573  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:03:18.633647  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:03:18.658961  459274 cri.go:89] found id: ""
	I1202 22:03:18.658984  459274 logs.go:282] 0 containers: []
	W1202 22:03:18.658993  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:03:18.658999  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:03:18.659080  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:03:18.683554  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:18.683576  459274 cri.go:89] found id: ""
	I1202 22:03:18.683584  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:03:18.683640  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:18.687443  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:03:18.687513  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:03:18.714019  459274 cri.go:89] found id: ""
	I1202 22:03:18.714048  459274 logs.go:282] 0 containers: []
	W1202 22:03:18.714057  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:03:18.714064  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:03:18.714125  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:03:18.746344  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:18.746367  459274 cri.go:89] found id: ""
	I1202 22:03:18.746375  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:03:18.746453  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:18.750546  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:03:18.750617  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:03:18.775075  459274 cri.go:89] found id: ""
	I1202 22:03:18.775097  459274 logs.go:282] 0 containers: []
	W1202 22:03:18.775106  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:03:18.775112  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:03:18.775170  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:03:18.799768  459274 cri.go:89] found id: ""
	I1202 22:03:18.799791  459274 logs.go:282] 0 containers: []
	W1202 22:03:18.799799  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:03:18.799812  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:03:18.799824  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:03:18.816874  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:03:18.816962  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:18.853838  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:03:18.853871  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:03:18.886379  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:03:18.886412  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:03:18.948745  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:03:18.948782  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:03:19.019910  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:03:19.019932  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:03:19.019948  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:19.060130  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:03:19.060170  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:19.104610  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:03:19.104645  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:19.133211  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:03:19.133243  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:03:21.662858  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:03:21.674216  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:03:21.674281  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:03:21.711762  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:21.711782  459274 cri.go:89] found id: ""
	I1202 22:03:21.711790  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:03:21.711856  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:21.716668  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:03:21.716738  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:03:21.756224  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:21.756242  459274 cri.go:89] found id: ""
	I1202 22:03:21.756250  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:03:21.756305  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:21.760792  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:03:21.760865  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:03:21.792175  459274 cri.go:89] found id: ""
	I1202 22:03:21.792197  459274 logs.go:282] 0 containers: []
	W1202 22:03:21.792206  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:03:21.792213  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:03:21.792270  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:03:21.827417  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:21.827435  459274 cri.go:89] found id: ""
	I1202 22:03:21.827443  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:03:21.827498  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:21.831991  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:03:21.832109  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:03:21.877529  459274 cri.go:89] found id: ""
	I1202 22:03:21.877604  459274 logs.go:282] 0 containers: []
	W1202 22:03:21.877638  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:03:21.877694  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:03:21.877790  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:03:21.909134  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:21.909159  459274 cri.go:89] found id: ""
	I1202 22:03:21.909170  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:03:21.909272  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:21.913635  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:03:21.913756  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:03:21.947222  459274 cri.go:89] found id: ""
	I1202 22:03:21.947290  459274 logs.go:282] 0 containers: []
	W1202 22:03:21.947329  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:03:21.947353  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:03:21.947447  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:03:21.977144  459274 cri.go:89] found id: ""
	I1202 22:03:21.977226  459274 logs.go:282] 0 containers: []
	W1202 22:03:21.977249  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:03:21.977293  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:03:21.977321  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:03:22.013003  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:03:22.013081  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:03:22.031237  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:03:22.031275  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:03:22.115768  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:03:22.115785  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:03:22.115804  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:22.154302  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:03:22.154333  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:22.182851  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:03:22.182879  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:03:22.222297  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:03:22.222373  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:03:22.294424  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:03:22.294499  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:22.403178  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:03:22.403250  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:24.982137  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:03:24.994681  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:03:24.994750  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:03:25.041035  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:25.041061  459274 cri.go:89] found id: ""
	I1202 22:03:25.041069  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:03:25.041127  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:25.045501  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:03:25.045577  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:03:25.074061  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:25.074082  459274 cri.go:89] found id: ""
	I1202 22:03:25.074091  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:03:25.074147  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:25.078653  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:03:25.078725  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:03:25.112952  459274 cri.go:89] found id: ""
	I1202 22:03:25.112974  459274 logs.go:282] 0 containers: []
	W1202 22:03:25.112985  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:03:25.112991  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:03:25.113049  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:03:25.141161  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:25.141180  459274 cri.go:89] found id: ""
	I1202 22:03:25.141189  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:03:25.141251  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:25.145286  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:03:25.145368  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:03:25.171898  459274 cri.go:89] found id: ""
	I1202 22:03:25.171923  459274 logs.go:282] 0 containers: []
	W1202 22:03:25.171931  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:03:25.171938  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:03:25.172020  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:03:25.198034  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:25.198082  459274 cri.go:89] found id: ""
	I1202 22:03:25.198090  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:03:25.198179  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:25.202799  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:03:25.202903  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:03:25.231491  459274 cri.go:89] found id: ""
	I1202 22:03:25.231514  459274 logs.go:282] 0 containers: []
	W1202 22:03:25.231523  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:03:25.231530  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:03:25.231615  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:03:25.256596  459274 cri.go:89] found id: ""
	I1202 22:03:25.256620  459274 logs.go:282] 0 containers: []
	W1202 22:03:25.256628  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:03:25.256642  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:03:25.256683  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:03:25.317867  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:03:25.317947  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:03:25.391204  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:03:25.391264  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:03:25.391284  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:25.429849  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:03:25.429878  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:25.476996  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:03:25.477026  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:25.511938  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:03:25.511967  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:03:25.528748  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:03:25.528775  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:25.561132  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:03:25.561165  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:03:25.595761  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:03:25.595818  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:03:28.143929  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:03:28.153901  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:03:28.153968  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:03:28.189570  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:28.189588  459274 cri.go:89] found id: ""
	I1202 22:03:28.189597  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:03:28.189683  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:28.193960  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:03:28.194026  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:03:28.236253  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:28.236273  459274 cri.go:89] found id: ""
	I1202 22:03:28.236281  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:03:28.236337  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:28.246146  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:03:28.246223  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:03:28.274260  459274 cri.go:89] found id: ""
	I1202 22:03:28.274283  459274 logs.go:282] 0 containers: []
	W1202 22:03:28.274291  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:03:28.274300  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:03:28.274360  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:03:28.312964  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:28.312986  459274 cri.go:89] found id: ""
	I1202 22:03:28.312994  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:03:28.313051  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:28.317599  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:03:28.317688  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:03:28.352192  459274 cri.go:89] found id: ""
	I1202 22:03:28.352218  459274 logs.go:282] 0 containers: []
	W1202 22:03:28.352227  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:03:28.352234  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:03:28.352297  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:03:28.400699  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:28.400720  459274 cri.go:89] found id: ""
	I1202 22:03:28.400729  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:03:28.400784  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:28.404954  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:03:28.405024  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:03:28.449559  459274 cri.go:89] found id: ""
	I1202 22:03:28.449583  459274 logs.go:282] 0 containers: []
	W1202 22:03:28.449591  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:03:28.449609  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:03:28.449694  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:03:28.490303  459274 cri.go:89] found id: ""
	I1202 22:03:28.490326  459274 logs.go:282] 0 containers: []
	W1202 22:03:28.490334  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:03:28.490348  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:03:28.490359  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:28.545930  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:03:28.545964  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:28.581537  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:03:28.581566  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:28.622619  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:03:28.622647  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:03:28.685509  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:03:28.685544  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:03:28.713392  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:03:28.713423  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:28.755053  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:03:28.755083  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:03:28.791121  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:03:28.791160  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:03:28.831813  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:03:28.831844  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:03:28.917450  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:03:31.418458  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:03:31.433313  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:03:31.433386  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:03:31.466397  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:31.466418  459274 cri.go:89] found id: ""
	I1202 22:03:31.466426  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:03:31.466492  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:31.472644  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:03:31.472713  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:03:31.509474  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:31.509539  459274 cri.go:89] found id: ""
	I1202 22:03:31.509551  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:03:31.509607  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:31.514231  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:03:31.514354  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:03:31.548757  459274 cri.go:89] found id: ""
	I1202 22:03:31.548834  459274 logs.go:282] 0 containers: []
	W1202 22:03:31.548858  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:03:31.548878  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:03:31.548978  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:03:31.590718  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:31.590788  459274 cri.go:89] found id: ""
	I1202 22:03:31.590810  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:03:31.590900  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:31.595178  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:03:31.595297  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:03:31.628052  459274 cri.go:89] found id: ""
	I1202 22:03:31.628126  459274 logs.go:282] 0 containers: []
	W1202 22:03:31.628149  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:03:31.628169  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:03:31.628259  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:03:31.668209  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:31.668282  459274 cri.go:89] found id: ""
	I1202 22:03:31.668304  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:03:31.668395  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:31.672723  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:03:31.672845  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:03:31.716316  459274 cri.go:89] found id: ""
	I1202 22:03:31.716388  459274 logs.go:282] 0 containers: []
	W1202 22:03:31.716410  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:03:31.716430  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:03:31.716521  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:03:31.751378  459274 cri.go:89] found id: ""
	I1202 22:03:31.751452  459274 logs.go:282] 0 containers: []
	W1202 22:03:31.751474  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:03:31.751501  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:03:31.751538  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:31.784744  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:03:31.784823  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:03:31.826946  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:03:31.826978  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:03:31.844356  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:03:31.844425  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:03:31.933098  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:03:31.933160  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:03:31.933186  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:31.998102  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:03:31.998145  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:03:32.098110  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:03:32.098141  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:03:32.205053  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:03:32.205089  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:32.252728  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:03:32.252757  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:34.813932  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:03:34.824783  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:03:34.824847  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:03:34.862593  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:34.862611  459274 cri.go:89] found id: ""
	I1202 22:03:34.862620  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:03:34.862672  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:34.867417  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:03:34.867490  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:03:34.895975  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:34.895993  459274 cri.go:89] found id: ""
	I1202 22:03:34.896001  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:03:34.896053  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:34.900501  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:03:34.900572  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:03:34.934821  459274 cri.go:89] found id: ""
	I1202 22:03:34.934843  459274 logs.go:282] 0 containers: []
	W1202 22:03:34.934852  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:03:34.934858  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:03:34.934921  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:03:34.974314  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:34.974338  459274 cri.go:89] found id: ""
	I1202 22:03:34.974347  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:03:34.974405  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:34.981467  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:03:34.981588  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:03:35.052754  459274 cri.go:89] found id: ""
	I1202 22:03:35.052780  459274 logs.go:282] 0 containers: []
	W1202 22:03:35.052788  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:03:35.052794  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:03:35.052857  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:03:35.136323  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:35.136346  459274 cri.go:89] found id: ""
	I1202 22:03:35.136355  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:03:35.136412  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:35.141809  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:03:35.141898  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:03:35.174593  459274 cri.go:89] found id: ""
	I1202 22:03:35.174620  459274 logs.go:282] 0 containers: []
	W1202 22:03:35.174628  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:03:35.174635  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:03:35.174693  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:03:35.203984  459274 cri.go:89] found id: ""
	I1202 22:03:35.204012  459274 logs.go:282] 0 containers: []
	W1202 22:03:35.204021  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:03:35.204035  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:03:35.204045  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:03:35.270277  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:03:35.270366  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:03:35.287392  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:03:35.287465  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:03:35.436405  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
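
The repeated "connection to the server localhost:8443 was refused" failures above are the root symptom of this run: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at localhost:8443, and although a kube-apiserver container (aad17f87...) is found on every pass, nothing ever answers on that port, so each "describe nodes" attempt fails and minikube falls back to dumping component logs. A minimal Go sketch of the same reachability check (the address and timeout here are illustrative, not minikube's actual probe):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probe reports whether anything is accepting TCP connections on addr.
    // A "connection refused" error here corresponds to the kubectl error
    // repeated throughout the log.
    func probe(addr string, timeout time.Duration) error {
        conn, err := net.DialTimeout("tcp", addr, timeout)
        if err != nil {
            return err // e.g. "connect: connection refused"
        }
        return conn.Close()
    }

    func main() {
        // localhost:8443 is the apiserver endpoint from the kubeconfig above.
        if err := probe("localhost:8443", 2*time.Second); err != nil {
            fmt.Println("apiserver not reachable:", err)
        } else {
            fmt.Println("apiserver port is open")
        }
    }
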
	I1202 22:03:35.436475  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:03:35.436502  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:35.487730  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:03:35.487769  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:35.542184  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:03:35.542231  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:03:35.612789  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:03:35.612866  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:35.672334  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:03:35.672370  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:35.711929  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:03:35.711958  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
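
One full diagnostic pass ends here. Each pass applies the same two-step pattern per component: "sudo crictl ps -a --quiet --name=<component>" to collect container IDs, then "crictl logs --tail 400 <id>" for every ID found; components with no match (coredns, kube-proxy, kindnet, storage-provisioner in this run) just emit the "No container was found" warning and are skipped. A hedged sketch of that pattern (helper names are illustrative, not the actual cri.go API):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors "sudo crictl ps -a --quiet --name=<name>",
    // returning one container ID per output line.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    // tailLogs mirrors "crictl logs --tail 400 <id>".
    func tailLogs(id string) (string, error) {
        out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainers(name)
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", name)
                continue
            }
            for _, id := range ids {
                logs, _ := tailLogs(id)
                fmt.Printf("=== %s [%s] ===\n%s", name, id, logs)
            }
        }
    }
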
	I1202 22:03:38.254414  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:03:38.264551  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:03:38.264624  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:03:38.289878  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:38.289906  459274 cri.go:89] found id: ""
	I1202 22:03:38.289915  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:03:38.289972  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:38.294495  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:03:38.294605  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:03:38.322983  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:38.323006  459274 cri.go:89] found id: ""
	I1202 22:03:38.323015  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:03:38.323070  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:38.329982  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:03:38.330067  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:03:38.359088  459274 cri.go:89] found id: ""
	I1202 22:03:38.359114  459274 logs.go:282] 0 containers: []
	W1202 22:03:38.359122  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:03:38.359129  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:03:38.359186  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:03:38.396092  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:38.396115  459274 cri.go:89] found id: ""
	I1202 22:03:38.396124  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:03:38.396180  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:38.400403  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:03:38.400502  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:03:38.428161  459274 cri.go:89] found id: ""
	I1202 22:03:38.428186  459274 logs.go:282] 0 containers: []
	W1202 22:03:38.428194  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:03:38.428201  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:03:38.428257  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:03:38.460410  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:38.460431  459274 cri.go:89] found id: ""
	I1202 22:03:38.460439  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:03:38.460499  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:38.464736  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:03:38.464830  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:03:38.491374  459274 cri.go:89] found id: ""
	I1202 22:03:38.491455  459274 logs.go:282] 0 containers: []
	W1202 22:03:38.491471  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:03:38.491478  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:03:38.491554  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:03:38.520280  459274 cri.go:89] found id: ""
	I1202 22:03:38.520311  459274 logs.go:282] 0 containers: []
	W1202 22:03:38.520320  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:03:38.520350  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:03:38.520370  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:03:38.587606  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:03:38.587640  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:03:38.674719  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:03:38.674739  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:03:38.674758  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:38.727767  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:03:38.727797  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:38.768951  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:03:38.768984  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:38.809391  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:03:38.809418  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:03:38.844202  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:03:38.844234  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:03:38.895727  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:03:38.895754  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:03:38.919547  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:03:38.919576  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
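
The pass timestamps (22:03:34, 22:03:38, 22:03:41, ...) show a fixed-cadence retry: a new pgrep/ps/log-gathering pass starts roughly every three seconds and keeps repeating until the apiserver becomes healthy or the overall start timeout expires. The shape of that deadline-bounded poll, as a sketch (the three-second interval is inferred from the timestamps, not taken from the healthcheck code):

    package main

    import (
        "errors"
        "fmt"
        "net"
        "time"
    )

    // pollUntil retries check at a fixed interval until it succeeds or the
    // deadline passes: the retry loop visible across the passes above.
    func pollUntil(interval, timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out: " + err.Error())
            }
            time.Sleep(interval)
        }
    }

    func main() {
        err := pollUntil(3*time.Second, 30*time.Second, func() error {
            conn, err := net.DialTimeout("tcp", "localhost:8443", time.Second)
            if err != nil {
                return err
            }
            return conn.Close()
        })
        fmt.Println("result:", err)
    }
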
	I1202 22:03:41.461205  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:03:41.477829  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:03:41.477925  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:03:41.504588  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:41.504612  459274 cri.go:89] found id: ""
	I1202 22:03:41.504621  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:03:41.504703  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:41.509252  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:03:41.509356  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:03:41.544533  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:41.544553  459274 cri.go:89] found id: ""
	I1202 22:03:41.544562  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:03:41.544652  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:41.549001  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:03:41.549108  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:03:41.586800  459274 cri.go:89] found id: ""
	I1202 22:03:41.586827  459274 logs.go:282] 0 containers: []
	W1202 22:03:41.586836  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:03:41.586843  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:03:41.586957  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:03:41.624716  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:41.624741  459274 cri.go:89] found id: ""
	I1202 22:03:41.624749  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:03:41.624835  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:41.629986  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:03:41.630088  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:03:41.658667  459274 cri.go:89] found id: ""
	I1202 22:03:41.658703  459274 logs.go:282] 0 containers: []
	W1202 22:03:41.658713  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:03:41.658737  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:03:41.658817  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:03:41.705847  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:41.705871  459274 cri.go:89] found id: ""
	I1202 22:03:41.705879  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:03:41.705934  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:41.710041  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:03:41.710123  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:03:41.749635  459274 cri.go:89] found id: ""
	I1202 22:03:41.749684  459274 logs.go:282] 0 containers: []
	W1202 22:03:41.749693  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:03:41.749700  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:03:41.749760  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:03:41.786482  459274 cri.go:89] found id: ""
	I1202 22:03:41.786505  459274 logs.go:282] 0 containers: []
	W1202 22:03:41.786514  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:03:41.786527  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:03:41.786540  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:03:41.849182  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:03:41.849219  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:03:41.865856  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:03:41.865886  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:03:41.938708  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:03:41.938729  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:03:41.938743  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:41.980262  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:03:41.980294  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:03:42.021398  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:03:42.021528  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:03:42.075121  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:03:42.075152  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:42.192665  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:03:42.192719  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:42.273370  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:03:42.273405  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
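
Every pass splits the components the same way: kube-apiserver, etcd, kube-scheduler, and kube-controller-manager have containers because the kubelet launches them directly as static pods from /etc/kubernetes/manifests, while coredns, kube-proxy, kindnet, and storage-provisioner stay at "0 containers" because they are created through the API server, which is exactly the endpoint that never came up. A small sketch of that grouping (the classification is an interpretation of the log, not test output):

    package main

    import "fmt"

    func main() {
        // Static pods come from /etc/kubernetes/manifests and need no apiserver.
        static := []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"}
        // These are created via the API, so they never appear while :8443 is down.
        apiManaged := []string{"coredns", "kube-proxy", "kindnet", "storage-provisioner"}

        // The "found id" results repeated in every pass of the log.
        found := map[string]bool{
            "kube-apiserver": true, "etcd": true,
            "kube-scheduler": true, "kube-controller-manager": true,
        }

        for _, name := range static {
            fmt.Printf("static      %-24s found=%v\n", name, found[name])
        }
        for _, name := range apiManaged {
            fmt.Printf("api-managed %-24s found=%v\n", name, found[name])
        }
    }
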
	I1202 22:03:44.809852  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:03:44.821423  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:03:44.821494  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:03:44.849444  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:44.849466  459274 cri.go:89] found id: ""
	I1202 22:03:44.849474  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:03:44.849530  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:44.853395  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:03:44.853469  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:03:44.881244  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:44.881263  459274 cri.go:89] found id: ""
	I1202 22:03:44.881271  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:03:44.881325  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:44.885602  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:03:44.885829  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:03:44.911936  459274 cri.go:89] found id: ""
	I1202 22:03:44.911957  459274 logs.go:282] 0 containers: []
	W1202 22:03:44.911966  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:03:44.911972  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:03:44.912029  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:03:44.938481  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:44.938500  459274 cri.go:89] found id: ""
	I1202 22:03:44.938508  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:03:44.938579  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:44.943040  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:03:44.943113  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:03:44.978539  459274 cri.go:89] found id: ""
	I1202 22:03:44.978561  459274 logs.go:282] 0 containers: []
	W1202 22:03:44.978569  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:03:44.978575  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:03:44.978634  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:03:45.016111  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:45.016195  459274 cri.go:89] found id: ""
	I1202 22:03:45.016220  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:03:45.016342  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:45.022176  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:03:45.022341  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:03:45.114563  459274 cri.go:89] found id: ""
	I1202 22:03:45.114672  459274 logs.go:282] 0 containers: []
	W1202 22:03:45.114703  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:03:45.114787  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:03:45.114901  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:03:45.212307  459274 cri.go:89] found id: ""
	I1202 22:03:45.212398  459274 logs.go:282] 0 containers: []
	W1202 22:03:45.212426  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:03:45.212483  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:03:45.212517  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:45.281715  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:03:45.281799  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:45.329398  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:03:45.329459  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:03:45.362326  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:03:45.362356  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:03:45.380751  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:03:45.380778  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:03:45.451195  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:03:45.451225  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:03:45.451255  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:45.486130  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:03:45.486166  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:45.520294  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:03:45.520328  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:03:45.553390  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:03:45.553423  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:03:48.113771  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:03:48.130649  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:03:48.130719  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:03:48.167368  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:48.167393  459274 cri.go:89] found id: ""
	I1202 22:03:48.167401  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:03:48.167502  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:48.171718  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:03:48.171790  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:03:48.198200  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:48.198224  459274 cri.go:89] found id: ""
	I1202 22:03:48.198233  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:03:48.198294  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:48.202314  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:03:48.202404  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:03:48.230670  459274 cri.go:89] found id: ""
	I1202 22:03:48.230691  459274 logs.go:282] 0 containers: []
	W1202 22:03:48.230700  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:03:48.230706  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:03:48.230762  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:03:48.256056  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:48.256074  459274 cri.go:89] found id: ""
	I1202 22:03:48.256082  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:03:48.256135  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:48.260011  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:03:48.260088  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:03:48.326198  459274 cri.go:89] found id: ""
	I1202 22:03:48.326243  459274 logs.go:282] 0 containers: []
	W1202 22:03:48.326252  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:03:48.326258  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:03:48.326333  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:03:48.383467  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:48.383488  459274 cri.go:89] found id: ""
	I1202 22:03:48.383496  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:03:48.383564  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:48.390490  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:03:48.390588  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:03:48.429873  459274 cri.go:89] found id: ""
	I1202 22:03:48.429894  459274 logs.go:282] 0 containers: []
	W1202 22:03:48.429902  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:03:48.429908  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:03:48.429964  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:03:48.456006  459274 cri.go:89] found id: ""
	I1202 22:03:48.456027  459274 logs.go:282] 0 containers: []
	W1202 22:03:48.456036  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:03:48.456051  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:03:48.456064  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:03:48.473034  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:03:48.473119  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:03:48.574259  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:03:48.574277  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:03:48.574290  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:48.616632  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:03:48.616717  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:03:48.659729  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:03:48.659775  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:03:48.711472  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:03:48.711502  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:03:48.778906  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:03:48.778940  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:48.816107  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:03:48.816140  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:48.865327  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:03:48.865410  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
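
The "container status" step is built defensively: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a first resolves crictl's path (falling back to the bare name if which finds nothing) and, if the crictl listing fails altogether, retries with docker ps -a. The same primary-then-fallback shape in Go (command lists are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runWithFallback tries the primary command and, only if it fails,
    // runs the fallback: the Go analogue of "cmdA || cmdB" in the log.
    func runWithFallback(primary, fallback []string) ([]byte, error) {
        out, err := exec.Command(primary[0], primary[1:]...).CombinedOutput()
        if err == nil {
            return out, nil
        }
        return exec.Command(fallback[0], fallback[1:]...).CombinedOutput()
    }

    func main() {
        out, err := runWithFallback(
            []string{"sudo", "crictl", "ps", "-a"},
            []string{"sudo", "docker", "ps", "-a"},
        )
        fmt.Println(string(out), err)
    }
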
	I1202 22:03:51.405814  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:03:51.416994  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:03:51.417059  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:03:51.466155  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:51.466173  459274 cri.go:89] found id: ""
	I1202 22:03:51.466181  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:03:51.466241  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:51.474232  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:03:51.474305  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:03:51.502823  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:51.502842  459274 cri.go:89] found id: ""
	I1202 22:03:51.502850  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:03:51.502907  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:51.507484  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:03:51.507607  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:03:51.535118  459274 cri.go:89] found id: ""
	I1202 22:03:51.535140  459274 logs.go:282] 0 containers: []
	W1202 22:03:51.535148  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:03:51.535155  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:03:51.535212  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:03:51.562946  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:51.562963  459274 cri.go:89] found id: ""
	I1202 22:03:51.562971  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:03:51.563024  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:51.567514  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:03:51.567581  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:03:51.594981  459274 cri.go:89] found id: ""
	I1202 22:03:51.595001  459274 logs.go:282] 0 containers: []
	W1202 22:03:51.595009  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:03:51.595019  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:03:51.595076  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:03:51.621399  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:51.621473  459274 cri.go:89] found id: ""
	I1202 22:03:51.621495  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:03:51.621582  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:51.626220  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:03:51.626343  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:03:51.663863  459274 cri.go:89] found id: ""
	I1202 22:03:51.663924  459274 logs.go:282] 0 containers: []
	W1202 22:03:51.663956  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:03:51.663978  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:03:51.664087  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:03:51.700099  459274 cri.go:89] found id: ""
	I1202 22:03:51.700171  459274 logs.go:282] 0 containers: []
	W1202 22:03:51.700194  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:03:51.700237  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:03:51.700268  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:03:51.717009  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:03:51.717039  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:03:51.806937  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:03:51.806959  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:03:51.806972  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:51.877086  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:03:51.877119  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:03:51.912562  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:03:51.912643  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:03:51.956513  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:03:51.956587  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:03:52.016898  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:03:52.016933  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:52.088915  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:03:52.088951  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:52.143765  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:03:52.143800  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:54.682652  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:03:54.694547  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:03:54.694624  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:03:54.732168  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:54.732191  459274 cri.go:89] found id: ""
	I1202 22:03:54.732200  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:03:54.732262  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:54.736782  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:03:54.736857  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:03:54.772173  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:54.772194  459274 cri.go:89] found id: ""
	I1202 22:03:54.772203  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:03:54.772271  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:54.777224  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:03:54.777300  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:03:54.834936  459274 cri.go:89] found id: ""
	I1202 22:03:54.834961  459274 logs.go:282] 0 containers: []
	W1202 22:03:54.834970  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:03:54.834977  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:03:54.835086  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:03:54.878129  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:54.878151  459274 cri.go:89] found id: ""
	I1202 22:03:54.878160  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:03:54.878216  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:54.883225  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:03:54.883294  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:03:54.911896  459274 cri.go:89] found id: ""
	I1202 22:03:54.911925  459274 logs.go:282] 0 containers: []
	W1202 22:03:54.911939  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:03:54.911945  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:03:54.912004  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:03:54.946916  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:54.946942  459274 cri.go:89] found id: ""
	I1202 22:03:54.946954  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:03:54.947017  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:54.951065  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:03:54.951140  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:03:54.976366  459274 cri.go:89] found id: ""
	I1202 22:03:54.976393  459274 logs.go:282] 0 containers: []
	W1202 22:03:54.976401  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:03:54.976408  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:03:54.976515  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:03:55.007975  459274 cri.go:89] found id: ""
	I1202 22:03:55.008004  459274 logs.go:282] 0 containers: []
	W1202 22:03:55.008013  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:03:55.008033  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:03:55.008045  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:03:55.045527  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:03:55.045553  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:03:55.111415  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:03:55.111454  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:03:55.201327  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:03:55.201349  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:03:55.201366  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:55.234537  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:03:55.234618  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:55.269068  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:03:55.269143  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:03:55.287476  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:03:55.287623  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:55.355292  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:03:55.355364  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:55.411761  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:03:55.411836  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
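
From 22:03:44 onward the passes are identical apart from timestamps and the order in which sources are gathered: the same four container IDs (aad17f87..., 7b8e4a11..., 4f637372..., b0fd2e79...) recur every time, so the control-plane containers are not being replaced or restarted under new IDs; the apiserver container simply never starts serving on 8443. A sketch of collapsing such repeated passes by fingerprinting the found-ID set (a log-reading aid, not part of the test harness):

    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    // fingerprint canonicalizes one pass's found IDs so that consecutive
    // identical passes can be collapsed when reading a long report.
    func fingerprint(ids []string) string {
        sorted := append([]string(nil), ids...)
        sort.Strings(sorted)
        return strings.Join(sorted, ",")
    }

    func main() {
        cycles := [][]string{ // found-ID sets from three consecutive passes above
            {"aad17f87", "7b8e4a11", "4f637372", "b0fd2e79"},
            {"aad17f87", "7b8e4a11", "4f637372", "b0fd2e79"},
            {"aad17f87", "7b8e4a11", "4f637372", "b0fd2e79"},
        }
        prev, repeats := "", 0
        for _, c := range cycles {
            if fp := fingerprint(c); fp == prev {
                repeats++
            } else {
                prev = fp
            }
        }
        fmt.Printf("steady state: %d repeated passes, no container churn\n", repeats)
    }
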
	I1202 22:03:57.962592  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:03:57.975505  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:03:57.975596  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:03:58.013228  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:58.013264  459274 cri.go:89] found id: ""
	I1202 22:03:58.013273  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:03:58.013332  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:58.018789  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:03:58.018866  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:03:58.053816  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:58.053841  459274 cri.go:89] found id: ""
	I1202 22:03:58.053850  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:03:58.053913  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:58.059259  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:03:58.059350  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:03:58.105856  459274 cri.go:89] found id: ""
	I1202 22:03:58.105888  459274 logs.go:282] 0 containers: []
	W1202 22:03:58.105902  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:03:58.105914  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:03:58.106000  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:03:58.146330  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:58.146360  459274 cri.go:89] found id: ""
	I1202 22:03:58.146368  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:03:58.146434  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:58.151802  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:03:58.151895  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:03:58.190095  459274 cri.go:89] found id: ""
	I1202 22:03:58.190128  459274 logs.go:282] 0 containers: []
	W1202 22:03:58.190142  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:03:58.190158  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:03:58.190239  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:03:58.229224  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:03:58.229253  459274 cri.go:89] found id: ""
	I1202 22:03:58.229262  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:03:58.229324  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:03:58.234485  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:03:58.234578  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:03:58.272771  459274 cri.go:89] found id: ""
	I1202 22:03:58.272808  459274 logs.go:282] 0 containers: []
	W1202 22:03:58.272823  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:03:58.272831  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:03:58.272917  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:03:58.318766  459274 cri.go:89] found id: ""
	I1202 22:03:58.318788  459274 logs.go:282] 0 containers: []
	W1202 22:03:58.318806  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:03:58.318820  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:03:58.318831  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:03:58.388231  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:03:58.388279  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:03:58.444201  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:03:58.444234  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:03:58.488067  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:03:58.488099  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:03:58.523071  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:03:58.523104  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:03:58.574230  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:03:58.574255  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:03:58.597790  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:03:58.597864  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:03:58.668589  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:03:58.668609  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:03:58.668622  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:03:58.704799  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:03:58.704834  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:01.244018  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:04:01.258046  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:04:01.258139  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:04:01.301886  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:04:01.301906  459274 cri.go:89] found id: ""
	I1202 22:04:01.301913  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:04:01.301974  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:01.306210  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:04:01.306280  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:04:01.335799  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:04:01.335817  459274 cri.go:89] found id: ""
	I1202 22:04:01.335825  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:04:01.335901  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:01.342197  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:04:01.342315  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:04:01.380075  459274 cri.go:89] found id: ""
	I1202 22:04:01.380139  459274 logs.go:282] 0 containers: []
	W1202 22:04:01.380161  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:04:01.380181  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:04:01.380267  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:04:01.421701  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:04:01.421808  459274 cri.go:89] found id: ""
	I1202 22:04:01.421830  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:04:01.421956  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:01.427033  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:04:01.427172  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:04:01.474440  459274 cri.go:89] found id: ""
	I1202 22:04:01.474506  459274 logs.go:282] 0 containers: []
	W1202 22:04:01.474536  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:04:01.474561  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:04:01.474661  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:04:01.527794  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:01.527856  459274 cri.go:89] found id: ""
	I1202 22:04:01.527880  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:04:01.527974  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:01.532481  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:04:01.532601  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:04:01.584004  459274 cri.go:89] found id: ""
	I1202 22:04:01.584093  459274 logs.go:282] 0 containers: []
	W1202 22:04:01.584123  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:04:01.584159  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:04:01.584243  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:04:01.653620  459274 cri.go:89] found id: ""
	I1202 22:04:01.653708  459274 logs.go:282] 0 containers: []
	W1202 22:04:01.653733  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:04:01.653775  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:04:01.653804  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:04:01.751565  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:04:01.751683  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:04:01.784903  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:04:01.784973  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:01.882580  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:04:01.882902  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:04:02.033031  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:04:02.033092  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:04:02.033121  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:04:02.103955  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:04:02.104029  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:04:02.154197  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:04:02.154267  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:04:02.221649  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:04:02.221730  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:04:02.277314  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:04:02.277391  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
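
For every container found, the pass tails the last 400 log lines via crictl, and uses journalctl for host units (kubelet, containerd). A hedged sketch of that gathering step, assuming the same paths as the log; gatherContainerLogs and gatherUnitLogs are names of my own, not minikube functions:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gatherContainerLogs mirrors the
    //   /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 <id>"
    // invocations above. CombinedOutput is used because container logs can
    // arrive on both stdout and stderr.
    func gatherContainerLogs(id string) (string, error) {
    	out, err := exec.Command("/bin/bash", "-c",
    		fmt.Sprintf("sudo /usr/local/bin/crictl logs --tail 400 %s", id)).CombinedOutput()
    	return string(out), err
    }

    // gatherUnitLogs mirrors the journalctl calls for kubelet and containerd.
    func gatherUnitLogs(unit string) (string, error) {
    	out, err := exec.Command("/bin/bash", "-c",
    		fmt.Sprintf("sudo journalctl -u %s -n 400", unit)).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	if logs, err := gatherUnitLogs("kubelet"); err == nil {
    		fmt.Print(logs)
    	}
    }

Capping every collector at 400 lines keeps each pass cheap enough to repeat on a short interval, which is why the same gather sequence recurs every few seconds below.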
	I1202 22:04:04.845786  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:04:04.855780  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:04:04.855891  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:04:04.886471  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:04:04.886493  459274 cri.go:89] found id: ""
	I1202 22:04:04.886502  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:04:04.886556  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:04.894898  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:04:04.895026  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:04:04.927100  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:04:04.927172  459274 cri.go:89] found id: ""
	I1202 22:04:04.927195  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:04:04.927274  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:04.932468  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:04:04.932609  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:04:04.975793  459274 cri.go:89] found id: ""
	I1202 22:04:04.975875  459274 logs.go:282] 0 containers: []
	W1202 22:04:04.975900  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:04:04.975921  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:04:04.976026  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:04:05.006762  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:04:05.006808  459274 cri.go:89] found id: ""
	I1202 22:04:05.006819  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:04:05.006925  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:05.012779  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:04:05.012934  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:04:05.058521  459274 cri.go:89] found id: ""
	I1202 22:04:05.058592  459274 logs.go:282] 0 containers: []
	W1202 22:04:05.058616  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:04:05.058638  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:04:05.058735  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:04:05.090934  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:05.091032  459274 cri.go:89] found id: ""
	I1202 22:04:05.091068  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:04:05.091170  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:05.095757  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:04:05.095883  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:04:05.124152  459274 cri.go:89] found id: ""
	I1202 22:04:05.124226  459274 logs.go:282] 0 containers: []
	W1202 22:04:05.124249  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:04:05.124270  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:04:05.124353  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:04:05.154709  459274 cri.go:89] found id: ""
	I1202 22:04:05.154788  459274 logs.go:282] 0 containers: []
	W1202 22:04:05.154822  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:04:05.154852  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:04:05.154893  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:04:05.194603  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:04:05.194674  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:04:05.236145  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:04:05.236217  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:04:05.286159  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:04:05.286253  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:04:05.327913  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:04:05.327989  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:04:05.398386  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:04:05.398470  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:04:05.418119  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:04:05.418331  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:04:05.508184  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:04:05.508377  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:04:05.508424  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:05.547245  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:04:05.547429  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
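
The "container status" step is deliberately defensive: it resolves crictl at run time and, if the CRI listing fails outright, falls back to the docker CLI. A sketch of that exact fallback chain as one shell invocation, wrapped in Go the same way the harness shells out; the function name is illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus mirrors the defensive one-liner from the log:
    //   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    // If crictl is absent or its listing fails, docker is tried instead,
    // so the step still yields output on docker-runtime clusters.
    func containerStatus() (string, error) {
    	out, err := exec.Command("/bin/bash", "-c",
    		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, _ := containerStatus()
    	fmt.Print(out)
    }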
	I1202 22:04:08.145779  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:04:08.157290  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:04:08.157362  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:04:08.205369  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:04:08.205391  459274 cri.go:89] found id: ""
	I1202 22:04:08.205399  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:04:08.205451  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:08.209986  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:04:08.210127  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:04:08.250615  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:04:08.250674  459274 cri.go:89] found id: ""
	I1202 22:04:08.250706  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:04:08.250796  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:08.255098  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:04:08.255217  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:04:08.300375  459274 cri.go:89] found id: ""
	I1202 22:04:08.300448  459274 logs.go:282] 0 containers: []
	W1202 22:04:08.300470  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:04:08.300490  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:04:08.300599  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:04:08.367531  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:04:08.367601  459274 cri.go:89] found id: ""
	I1202 22:04:08.367632  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:04:08.367718  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:08.372568  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:04:08.372688  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:04:08.406323  459274 cri.go:89] found id: ""
	I1202 22:04:08.406345  459274 logs.go:282] 0 containers: []
	W1202 22:04:08.406353  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:04:08.406360  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:04:08.406426  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:04:08.438274  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:08.438350  459274 cri.go:89] found id: ""
	I1202 22:04:08.438387  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:04:08.438467  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:08.442699  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:04:08.442796  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:04:08.470994  459274 cri.go:89] found id: ""
	I1202 22:04:08.471020  459274 logs.go:282] 0 containers: []
	W1202 22:04:08.471029  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:04:08.471041  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:04:08.471112  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:04:08.502947  459274 cri.go:89] found id: ""
	I1202 22:04:08.502974  459274 logs.go:282] 0 containers: []
	W1202 22:04:08.502983  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:04:08.502998  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:04:08.503010  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:04:08.542078  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:04:08.542107  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:04:08.592789  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:04:08.592860  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:04:08.657494  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:04:08.657530  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:04:08.674715  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:04:08.674742  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:04:08.753127  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:04:08.753149  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:04:08.753166  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:04:08.801391  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:04:08.801425  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:08.848809  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:04:08.848839  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:04:08.893153  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:04:08.893194  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
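
The one collector that keeps failing in every pass is "describe nodes": it shells out to the version-pinned kubectl with the in-node kubeconfig, which points at localhost:8443, and nothing is listening there while kube-apiserver is down. A minimal sketch that reproduces the call and pre-checks the port to show where the recurring "connection ... refused" stderr comes from (the TCP probe is my addition, not something the harness does):

    package main

    import (
    	"fmt"
    	"net"
    	"os/exec"
    	"time"
    )

    // describeNodes reproduces the failing command from the log. While the
    // apiserver is down, the TCP connect is refused and kubectl exits 1.
    func describeNodes() ([]byte, error) {
    	return exec.Command("/bin/bash", "-c",
    		"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes"+
    			" --kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
    }

    func main() {
    	// If this dial fails, kubectl will print "The connection to the
    	// server localhost:8443 was refused", exactly as in the log.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver port not reachable:", err)
    		return
    	}
    	conn.Close()
    	out, err := describeNodes()
    	fmt.Println(string(out), err)
    }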
	I1202 22:04:11.456435  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:04:11.467204  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:04:11.467327  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:04:11.494126  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:04:11.494157  459274 cri.go:89] found id: ""
	I1202 22:04:11.494166  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:04:11.494234  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:11.498537  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:04:11.498615  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:04:11.526730  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:04:11.526752  459274 cri.go:89] found id: ""
	I1202 22:04:11.526760  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:04:11.526832  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:11.531541  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:04:11.531616  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:04:11.567899  459274 cri.go:89] found id: ""
	I1202 22:04:11.567926  459274 logs.go:282] 0 containers: []
	W1202 22:04:11.567946  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:04:11.567953  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:04:11.568015  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:04:11.607435  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:04:11.607459  459274 cri.go:89] found id: ""
	I1202 22:04:11.607468  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:04:11.607526  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:11.614696  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:04:11.614773  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:04:11.642745  459274 cri.go:89] found id: ""
	I1202 22:04:11.642771  459274 logs.go:282] 0 containers: []
	W1202 22:04:11.642779  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:04:11.642786  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:04:11.642848  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:04:11.673183  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:11.673205  459274 cri.go:89] found id: ""
	I1202 22:04:11.673214  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:04:11.673268  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:11.677345  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:04:11.677417  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:04:11.703803  459274 cri.go:89] found id: ""
	I1202 22:04:11.703828  459274 logs.go:282] 0 containers: []
	W1202 22:04:11.703836  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:04:11.703842  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:04:11.703900  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:04:11.738169  459274 cri.go:89] found id: ""
	I1202 22:04:11.738194  459274 logs.go:282] 0 containers: []
	W1202 22:04:11.738203  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:04:11.738218  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:04:11.738229  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:04:11.823410  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:04:11.823450  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:04:11.848371  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:04:11.848444  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:04:11.893173  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:04:11.893210  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:04:11.927419  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:04:11.927454  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:04:12.001453  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:04:12.001476  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:04:12.001490  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:04:12.073820  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:04:12.073861  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:12.131779  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:04:12.131809  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:04:12.173110  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:04:12.173146  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
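
The cadence of the "sudo pgrep -xnf kube-apiserver.*minikube.*" lines (roughly every 2.5 to 3 seconds) indicates a poll-until-healthy loop: check for the apiserver process, gather a diagnostic snapshot, sleep, repeat. A sketch of that pattern under stated assumptions; the interval and deadline are illustrative guesses, not values taken from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerProcess polls for a running kube-apiserver process
    // with the same pgrep pattern the log shows, sleeping between attempts.
    func waitForAPIServerProcess(interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 when at least one process matches.
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServerProcess(3*time.Second, 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }

In this run the loop never succeeds, which is why identical passes repeat until the enclosing test times out.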
	I1202 22:04:14.707250  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:04:14.718809  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:04:14.718875  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:04:14.751910  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:04:14.751929  459274 cri.go:89] found id: ""
	I1202 22:04:14.751938  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:04:14.752004  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:14.756387  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:04:14.756462  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:04:14.783645  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:04:14.783666  459274 cri.go:89] found id: ""
	I1202 22:04:14.783674  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:04:14.783741  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:14.787683  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:04:14.787754  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:04:14.822166  459274 cri.go:89] found id: ""
	I1202 22:04:14.822187  459274 logs.go:282] 0 containers: []
	W1202 22:04:14.822195  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:04:14.822202  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:04:14.822261  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:04:14.854166  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:04:14.854186  459274 cri.go:89] found id: ""
	I1202 22:04:14.854194  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:04:14.854249  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:14.858439  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:04:14.858512  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:04:14.884194  459274 cri.go:89] found id: ""
	I1202 22:04:14.884216  459274 logs.go:282] 0 containers: []
	W1202 22:04:14.884224  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:04:14.884230  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:04:14.884287  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:04:14.910435  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:14.910455  459274 cri.go:89] found id: ""
	I1202 22:04:14.910464  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:04:14.910520  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:14.914900  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:04:14.914973  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:04:14.939191  459274 cri.go:89] found id: ""
	I1202 22:04:14.939214  459274 logs.go:282] 0 containers: []
	W1202 22:04:14.939222  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:04:14.939229  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:04:14.939287  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:04:14.971237  459274 cri.go:89] found id: ""
	I1202 22:04:14.971262  459274 logs.go:282] 0 containers: []
	W1202 22:04:14.971272  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:04:14.971285  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:04:14.971296  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:04:15.006237  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:04:15.006288  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:04:15.053216  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:04:15.053248  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:15.094996  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:04:15.095026  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:04:15.139538  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:04:15.139575  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:04:15.235102  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:04:15.235153  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:04:15.336078  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:04:15.336152  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:04:15.336180  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:04:15.375797  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:04:15.375836  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:04:15.408655  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:04:15.408683  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
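
The dmesg step is filtered so each snapshot stays small and pipe-safe: only warning-or-worse kernel messages, no pager, no colour, capped at 400 lines like the other collectors. A one-function sketch of that invocation, with the flag meanings (per util-linux dmesg) noted in comments; the helper name is mine:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // kernelWarnings mirrors the dmesg step from the log: -P disables the
    // pager, -H forces human-readable output, -L=never strips colour so
    // the text pipes cleanly, and --level keeps only warn-and-worse
    // messages; tail bounds the capture at 400 lines.
    func kernelWarnings() (string, error) {
    	out, err := exec.Command("/bin/bash", "-c",
    		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, _ := kernelWarnings()
    	fmt.Print(out)
    }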
	I1202 22:04:17.926525  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:04:17.936801  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:04:17.936903  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:04:17.966762  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:04:17.966826  459274 cri.go:89] found id: ""
	I1202 22:04:17.966852  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:04:17.966931  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:17.970889  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:04:17.971011  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:04:17.995856  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:04:17.995879  459274 cri.go:89] found id: ""
	I1202 22:04:17.995887  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:04:17.995969  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:18.000161  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:04:18.000263  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:04:18.039349  459274 cri.go:89] found id: ""
	I1202 22:04:18.039415  459274 logs.go:282] 0 containers: []
	W1202 22:04:18.039440  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:04:18.039460  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:04:18.039527  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:04:18.067052  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:04:18.067127  459274 cri.go:89] found id: ""
	I1202 22:04:18.067150  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:04:18.067216  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:18.071483  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:04:18.071556  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:04:18.097117  459274 cri.go:89] found id: ""
	I1202 22:04:18.097144  459274 logs.go:282] 0 containers: []
	W1202 22:04:18.097152  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:04:18.097159  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:04:18.097223  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:04:18.130892  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:18.130915  459274 cri.go:89] found id: ""
	I1202 22:04:18.130933  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:04:18.131041  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:18.135502  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:04:18.135618  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:04:18.166674  459274 cri.go:89] found id: ""
	I1202 22:04:18.166751  459274 logs.go:282] 0 containers: []
	W1202 22:04:18.166774  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:04:18.166786  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:04:18.166850  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:04:18.192546  459274 cri.go:89] found id: ""
	I1202 22:04:18.192580  459274 logs.go:282] 0 containers: []
	W1202 22:04:18.192589  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:04:18.192619  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:04:18.192637  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:04:18.251032  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:04:18.251068  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:04:18.270000  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:04:18.270029  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:04:18.364142  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:04:18.364162  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:04:18.364175  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:04:18.397808  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:04:18.397847  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:04:18.436760  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:04:18.436800  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:18.466154  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:04:18.466185  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:04:18.499592  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:04:18.499630  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:04:18.535696  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:04:18.535730  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:04:21.079318  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:04:21.090538  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:04:21.090687  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:04:21.118644  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:04:21.118669  459274 cri.go:89] found id: ""
	I1202 22:04:21.118678  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:04:21.118742  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:21.123085  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:04:21.123201  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:04:21.148213  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:04:21.148237  459274 cri.go:89] found id: ""
	I1202 22:04:21.148246  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:04:21.148302  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:21.152471  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:04:21.152590  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:04:21.177924  459274 cri.go:89] found id: ""
	I1202 22:04:21.177948  459274 logs.go:282] 0 containers: []
	W1202 22:04:21.177957  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:04:21.177963  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:04:21.178043  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:04:21.203632  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:04:21.203654  459274 cri.go:89] found id: ""
	I1202 22:04:21.203663  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:04:21.203720  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:21.208104  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:04:21.208211  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:04:21.234150  459274 cri.go:89] found id: ""
	I1202 22:04:21.234176  459274 logs.go:282] 0 containers: []
	W1202 22:04:21.234197  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:04:21.234205  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:04:21.234291  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:04:21.259711  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:21.259733  459274 cri.go:89] found id: ""
	I1202 22:04:21.259741  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:04:21.259796  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:21.263821  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:04:21.263897  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:04:21.293539  459274 cri.go:89] found id: ""
	I1202 22:04:21.293564  459274 logs.go:282] 0 containers: []
	W1202 22:04:21.293573  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:04:21.293579  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:04:21.293637  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:04:21.322459  459274 cri.go:89] found id: ""
	I1202 22:04:21.322484  459274 logs.go:282] 0 containers: []
	W1202 22:04:21.322492  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:04:21.322506  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:04:21.322519  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:04:21.368292  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:04:21.368321  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:21.405687  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:04:21.405717  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:04:21.472900  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:04:21.472920  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:04:21.472934  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:04:21.512721  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:04:21.512813  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:04:21.559569  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:04:21.559599  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:04:21.590942  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:04:21.590973  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:04:21.618995  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:04:21.619022  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:04:21.679063  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:04:21.679096  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:04:24.197551  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:04:24.208011  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:04:24.208085  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:04:24.235510  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:04:24.235530  459274 cri.go:89] found id: ""
	I1202 22:04:24.235539  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:04:24.235599  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:24.239668  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:04:24.239743  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:04:24.266503  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:04:24.266526  459274 cri.go:89] found id: ""
	I1202 22:04:24.266535  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:04:24.266623  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:24.276533  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:04:24.276626  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:04:24.316931  459274 cri.go:89] found id: ""
	I1202 22:04:24.316954  459274 logs.go:282] 0 containers: []
	W1202 22:04:24.316962  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:04:24.316969  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:04:24.317029  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:04:24.347992  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:04:24.348010  459274 cri.go:89] found id: ""
	I1202 22:04:24.348018  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:04:24.348072  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:24.352908  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:04:24.352988  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:04:24.385957  459274 cri.go:89] found id: ""
	I1202 22:04:24.385979  459274 logs.go:282] 0 containers: []
	W1202 22:04:24.385988  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:04:24.385995  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:04:24.386052  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:04:24.412293  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:24.412369  459274 cri.go:89] found id: ""
	I1202 22:04:24.412391  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:04:24.412486  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:24.416471  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:04:24.416544  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:04:24.440802  459274 cri.go:89] found id: ""
	I1202 22:04:24.440823  459274 logs.go:282] 0 containers: []
	W1202 22:04:24.440832  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:04:24.440839  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:04:24.440896  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:04:24.470000  459274 cri.go:89] found id: ""
	I1202 22:04:24.470024  459274 logs.go:282] 0 containers: []
	W1202 22:04:24.470033  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:04:24.470047  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:04:24.470065  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:04:24.486472  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:04:24.486511  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:04:24.560899  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:04:24.560920  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:04:24.560935  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:04:24.594077  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:04:24.594108  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:24.623874  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:04:24.623902  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:04:24.652964  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:04:24.652993  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:04:24.718886  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:04:24.718992  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:04:24.754466  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:04:24.754552  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:04:24.798965  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:04:24.799003  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
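The block above is one pass of minikube's diagnostic sweep: enumerate CRI containers by name, then pull logs for each found component plus kubelet, containerd, dmesg, and kubectl describe nodes. A minimal sketch of gathering the same evidence by hand over SSH, assuming a placeholder profile name (not taken from this run):

  # same evidence as the sweep above, collected manually; <profile> is a placeholder
  minikube ssh -p <profile> -- sudo crictl ps -a
  minikube ssh -p <profile> -- sudo journalctl -u kubelet -n 400 --no-pager
  minikube ssh -p <profile> -- sudo journalctl -u containerd -n 400 --no-pager
  minikube ssh -p <profile> -- "sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400"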
	I1202 22:04:27.333822  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:04:27.350480  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:04:27.350601  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:04:27.385992  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:04:27.386074  459274 cri.go:89] found id: ""
	I1202 22:04:27.386100  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:04:27.386191  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:27.390525  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:04:27.390603  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:04:27.416356  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:04:27.416381  459274 cri.go:89] found id: ""
	I1202 22:04:27.416389  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:04:27.416457  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:27.420453  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:04:27.420593  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:04:27.453031  459274 cri.go:89] found id: ""
	I1202 22:04:27.453057  459274 logs.go:282] 0 containers: []
	W1202 22:04:27.453065  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:04:27.453072  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:04:27.453133  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:04:27.478382  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:04:27.478406  459274 cri.go:89] found id: ""
	I1202 22:04:27.478415  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:04:27.478475  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:27.482630  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:04:27.482723  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:04:27.513148  459274 cri.go:89] found id: ""
	I1202 22:04:27.513181  459274 logs.go:282] 0 containers: []
	W1202 22:04:27.513192  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:04:27.513217  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:04:27.513302  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:04:27.547657  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:27.547681  459274 cri.go:89] found id: ""
	I1202 22:04:27.547689  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:04:27.547785  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:27.552186  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:04:27.552301  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:04:27.577737  459274 cri.go:89] found id: ""
	I1202 22:04:27.577811  459274 logs.go:282] 0 containers: []
	W1202 22:04:27.577834  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:04:27.577854  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:04:27.577946  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:04:27.611082  459274 cri.go:89] found id: ""
	I1202 22:04:27.611147  459274 logs.go:282] 0 containers: []
	W1202 22:04:27.611173  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:04:27.611195  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:04:27.611220  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:04:27.673630  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:04:27.673673  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:04:27.743421  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:04:27.743448  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:04:27.743467  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:04:27.784761  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:04:27.784801  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:27.814694  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:04:27.814722  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:04:27.831277  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:04:27.831307  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:04:27.863688  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:04:27.863718  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:04:27.905849  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:04:27.905882  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:04:27.941921  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:04:27.941952  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:04:30.473091  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:04:30.486804  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:04:30.486890  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:04:30.536036  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:04:30.536067  459274 cri.go:89] found id: ""
	I1202 22:04:30.536076  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:04:30.536132  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:30.540644  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:04:30.540710  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:04:30.584510  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:04:30.584572  459274 cri.go:89] found id: ""
	I1202 22:04:30.584595  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:04:30.584679  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:30.588831  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:04:30.588905  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:04:30.628519  459274 cri.go:89] found id: ""
	I1202 22:04:30.628540  459274 logs.go:282] 0 containers: []
	W1202 22:04:30.628554  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:04:30.628561  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:04:30.628620  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:04:30.654205  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:04:30.654225  459274 cri.go:89] found id: ""
	I1202 22:04:30.654233  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:04:30.654289  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:30.658666  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:04:30.658744  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:04:30.715573  459274 cri.go:89] found id: ""
	I1202 22:04:30.715660  459274 logs.go:282] 0 containers: []
	W1202 22:04:30.715683  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:04:30.715704  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:04:30.715797  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:04:30.744682  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:30.744705  459274 cri.go:89] found id: ""
	I1202 22:04:30.744713  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:04:30.744768  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:30.752978  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:04:30.753053  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:04:30.785464  459274 cri.go:89] found id: ""
	I1202 22:04:30.785491  459274 logs.go:282] 0 containers: []
	W1202 22:04:30.785499  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:04:30.785506  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:04:30.785571  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:04:30.812956  459274 cri.go:89] found id: ""
	I1202 22:04:30.812983  459274 logs.go:282] 0 containers: []
	W1202 22:04:30.812992  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:04:30.813007  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:04:30.813019  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:04:30.869919  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:04:30.869957  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:04:30.950357  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:04:30.950424  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:30.985271  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:04:30.985341  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:04:31.035500  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:04:31.035570  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:04:31.121180  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:04:31.121273  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:04:31.171585  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:04:31.171658  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:04:31.240635  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:04:31.240674  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:04:31.261226  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:04:31.261258  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:04:31.351319  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
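Each "connection to the server localhost:8443 was refused" above means nothing inside the node is accepting connections on the apiserver port, even though a kube-apiserver container is still listed. A minimal check, assuming a shell inside the node:

  # is anything listening on the apiserver port?
  sudo ss -tlnp | grep ':8443' || echo 'no listener on 8443'
  # and what state is the apiserver container actually in?
  sudo crictl ps -a --name kube-apiserver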
	I1202 22:04:33.852234  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:04:33.863027  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:04:33.863098  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:04:33.893561  459274 cri.go:89] found id: "aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:04:33.893587  459274 cri.go:89] found id: ""
	I1202 22:04:33.893596  459274 logs.go:282] 1 containers: [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4]
	I1202 22:04:33.893672  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:33.897924  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:04:33.898003  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:04:33.927166  459274 cri.go:89] found id: "7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:04:33.927186  459274 cri.go:89] found id: ""
	I1202 22:04:33.927195  459274 logs.go:282] 1 containers: [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d]
	I1202 22:04:33.927264  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:33.931568  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:04:33.931643  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:04:33.960079  459274 cri.go:89] found id: ""
	I1202 22:04:33.960103  459274 logs.go:282] 0 containers: []
	W1202 22:04:33.960112  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:04:33.960119  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:04:33.960177  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:04:33.988776  459274 cri.go:89] found id: "4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:04:33.988801  459274 cri.go:89] found id: ""
	I1202 22:04:33.988810  459274 logs.go:282] 1 containers: [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3]
	I1202 22:04:33.988874  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:33.992960  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:04:33.993034  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:04:34.024421  459274 cri.go:89] found id: ""
	I1202 22:04:34.024448  459274 logs.go:282] 0 containers: []
	W1202 22:04:34.024457  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:04:34.024463  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:04:34.024538  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:04:34.064191  459274 cri.go:89] found id: "b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:34.064215  459274 cri.go:89] found id: ""
	I1202 22:04:34.064224  459274 logs.go:282] 1 containers: [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0]
	I1202 22:04:34.064284  459274 ssh_runner.go:195] Run: which crictl
	I1202 22:04:34.069973  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:04:34.070046  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:04:34.098807  459274 cri.go:89] found id: ""
	I1202 22:04:34.098832  459274 logs.go:282] 0 containers: []
	W1202 22:04:34.098841  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:04:34.098848  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:04:34.098907  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:04:34.131412  459274 cri.go:89] found id: ""
	I1202 22:04:34.131482  459274 logs.go:282] 0 containers: []
	W1202 22:04:34.131506  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:04:34.131526  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:04:34.131539  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:04:34.202911  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:04:34.202930  459274 logs.go:123] Gathering logs for kube-apiserver [aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4] ...
	I1202 22:04:34.202943  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4"
	I1202 22:04:34.238229  459274 logs.go:123] Gathering logs for kube-controller-manager [b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0] ...
	I1202 22:04:34.238265  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0"
	I1202 22:04:34.275764  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:04:34.275804  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:04:34.307460  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:04:34.307486  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:04:34.366459  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:04:34.366497  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:04:34.383250  459274 logs.go:123] Gathering logs for etcd [7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d] ...
	I1202 22:04:34.383282  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d"
	I1202 22:04:34.419357  459274 logs.go:123] Gathering logs for kube-scheduler [4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3] ...
	I1202 22:04:34.419389  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3"
	I1202 22:04:34.456267  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:04:34.456297  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
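The pgrep probes at 22:04:27, 22:04:30, and 22:04:33 above (and again on the next line) are how minikube polls for a live apiserver process between sweeps. The same check by hand, with the pattern copied from the log:

  sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'kube-apiserver process not running'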
	I1202 22:04:36.990526  459274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:04:37.001160  459274 kubeadm.go:602] duration metric: took 4m4.930224083s to restartPrimaryControlPlane
	W1202 22:04:37.001234  459274 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1202 22:04:37.001299  459274 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1202 22:04:37.528743  459274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:04:37.542161  459274 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 22:04:37.555117  459274 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 22:04:37.555178  459274 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 22:04:37.566397  459274 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 22:04:37.566415  459274 kubeadm.go:158] found existing configuration files:
	
	I1202 22:04:37.566469  459274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 22:04:37.576690  459274 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 22:04:37.576760  459274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 22:04:37.587620  459274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 22:04:37.597150  459274 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 22:04:37.597217  459274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 22:04:37.605626  459274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 22:04:37.614250  459274 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 22:04:37.614370  459274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 22:04:37.627169  459274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 22:04:37.635606  459274 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 22:04:37.635725  459274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
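The grep-then-rm sequence above is minikube's stale-kubeconfig cleanup: any file under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so kubeadm can regenerate it. A compact sketch of the same logic, with the endpoint and file names taken from the log:

  endpoint='https://control-plane.minikube.internal:8443'
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    # keep the file only if it points at the expected endpoint
    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
  done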
	I1202 22:04:37.646298  459274 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 22:04:37.701110  459274 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 22:04:37.701345  459274 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 22:04:37.800639  459274 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 22:04:37.800761  459274 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 22:04:37.800827  459274 kubeadm.go:319] OS: Linux
	I1202 22:04:37.800931  459274 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 22:04:37.801028  459274 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 22:04:37.801081  459274 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 22:04:37.801131  459274 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 22:04:37.801181  459274 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 22:04:37.801229  459274 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 22:04:37.801275  459274 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 22:04:37.801323  459274 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 22:04:37.801371  459274 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 22:04:37.888958  459274 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 22:04:37.889180  459274 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 22:04:37.889302  459274 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 22:04:37.895530  459274 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 22:04:37.902474  459274 out.go:252]   - Generating certificates and keys ...
	I1202 22:04:37.902618  459274 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 22:04:37.902730  459274 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 22:04:37.902853  459274 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 22:04:37.902946  459274 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 22:04:37.903065  459274 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 22:04:37.903156  459274 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 22:04:37.903243  459274 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 22:04:37.903322  459274 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 22:04:37.903614  459274 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 22:04:37.906091  459274 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 22:04:37.906718  459274 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 22:04:37.907017  459274 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 22:04:38.104451  459274 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 22:04:38.616052  459274 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 22:04:38.847360  459274 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 22:04:39.065144  459274 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 22:04:39.528598  459274 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 22:04:39.529337  459274 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 22:04:39.532088  459274 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 22:04:39.535596  459274 out.go:252]   - Booting up control plane ...
	I1202 22:04:39.535709  459274 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 22:04:39.535787  459274 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 22:04:39.535853  459274 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 22:04:39.567576  459274 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 22:04:39.567682  459274 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 22:04:39.578095  459274 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 22:04:39.578767  459274 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 22:04:39.578816  459274 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 22:04:39.779457  459274 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 22:04:39.779599  459274 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 22:08:39.780187  459274 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000925699s
	I1202 22:08:39.780220  459274 kubeadm.go:319] 
	I1202 22:08:39.780278  459274 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 22:08:39.780314  459274 kubeadm.go:319] 	- The kubelet is not running
	I1202 22:08:39.780420  459274 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 22:08:39.780426  459274 kubeadm.go:319] 
	I1202 22:08:39.780530  459274 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 22:08:39.780561  459274 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 22:08:39.780592  459274 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 22:08:39.780599  459274 kubeadm.go:319] 
	I1202 22:08:39.788257  459274 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 22:08:39.788702  459274 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 22:08:39.788814  459274 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 22:08:39.789048  459274 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 22:08:39.789054  459274 kubeadm.go:319] 
	I1202 22:08:39.789126  459274 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1202 22:08:39.789241  459274 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000925699s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
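The init attempt fails because the kubelet never answers its health endpoint within the 4m0s budget. The probe kubeadm gives up on, plus the unit checks its error message recommends, assuming a shell inside the node:

  # kubeadm's own health probe (quoted in the error above)
  curl -sf http://127.0.0.1:10248/healthz && echo 'kubelet healthy'
  # the follow-ups the error message suggests
  systemctl status kubelet --no-pager
  journalctl -xeu kubelet --no-pager | tail -n 50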
	
	I1202 22:08:39.789323  459274 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1202 22:08:40.211969  459274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:08:40.227498  459274 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 22:08:40.227566  459274 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 22:08:40.236374  459274 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 22:08:40.236398  459274 kubeadm.go:158] found existing configuration files:
	
	I1202 22:08:40.236451  459274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 22:08:40.244993  459274 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 22:08:40.245062  459274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 22:08:40.253164  459274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 22:08:40.261337  459274 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 22:08:40.261416  459274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 22:08:40.269335  459274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 22:08:40.277931  459274 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 22:08:40.278004  459274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 22:08:40.286630  459274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 22:08:40.295050  459274 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 22:08:40.295125  459274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 22:08:40.303199  459274 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 22:08:40.352580  459274 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 22:08:40.352665  459274 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 22:08:40.435181  459274 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 22:08:40.435249  459274 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 22:08:40.435283  459274 kubeadm.go:319] OS: Linux
	I1202 22:08:40.435325  459274 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 22:08:40.435370  459274 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 22:08:40.435414  459274 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 22:08:40.435458  459274 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 22:08:40.435503  459274 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 22:08:40.435552  459274 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 22:08:40.435595  459274 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 22:08:40.435640  459274 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 22:08:40.435683  459274 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 22:08:40.505484  459274 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 22:08:40.505644  459274 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 22:08:40.505776  459274 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 22:08:40.514072  459274 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 22:08:40.519634  459274 out.go:252]   - Generating certificates and keys ...
	I1202 22:08:40.519743  459274 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 22:08:40.519832  459274 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 22:08:40.519933  459274 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 22:08:40.520008  459274 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 22:08:40.520105  459274 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 22:08:40.520177  459274 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 22:08:40.520254  459274 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 22:08:40.520329  459274 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 22:08:40.520430  459274 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 22:08:40.520513  459274 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 22:08:40.520553  459274 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 22:08:40.520617  459274 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 22:08:40.802566  459274 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 22:08:41.188086  459274 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 22:08:41.270395  459274 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 22:08:42.191160  459274 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 22:08:42.590565  459274 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 22:08:42.591160  459274 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 22:08:42.593718  459274 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 22:08:42.596884  459274 out.go:252]   - Booting up control plane ...
	I1202 22:08:42.596997  459274 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 22:08:42.597076  459274 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 22:08:42.597934  459274 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 22:08:42.620165  459274 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 22:08:42.620282  459274 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 22:08:42.628063  459274 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 22:08:42.628391  459274 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 22:08:42.628683  459274 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 22:08:42.767877  459274 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 22:08:42.767997  459274 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 22:12:42.768146  459274 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000568173s
	I1202 22:12:42.768178  459274 kubeadm.go:319] 
	I1202 22:12:42.768235  459274 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 22:12:42.768268  459274 kubeadm.go:319] 	- The kubelet is not running
	I1202 22:12:42.768372  459274 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 22:12:42.768378  459274 kubeadm.go:319] 
	I1202 22:12:42.768483  459274 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 22:12:42.768523  459274 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 22:12:42.768555  459274 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 22:12:42.768559  459274 kubeadm.go:319] 
	I1202 22:12:42.772849  459274 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 22:12:42.773257  459274 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 22:12:42.773366  459274 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 22:12:42.773590  459274 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 22:12:42.773600  459274 kubeadm.go:319] 
	I1202 22:12:42.773683  459274 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1202 22:12:42.773744  459274 kubeadm.go:403] duration metric: took 12m10.765327212s to StartCluster
	I1202 22:12:42.773784  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:12:42.773842  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:12:42.799579  459274 cri.go:89] found id: ""
	I1202 22:12:42.799603  459274 logs.go:282] 0 containers: []
	W1202 22:12:42.799611  459274 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:12:42.799618  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:12:42.799682  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:12:42.827888  459274 cri.go:89] found id: ""
	I1202 22:12:42.827911  459274 logs.go:282] 0 containers: []
	W1202 22:12:42.827920  459274 logs.go:284] No container was found matching "etcd"
	I1202 22:12:42.827926  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:12:42.827984  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:12:42.852837  459274 cri.go:89] found id: ""
	I1202 22:12:42.852862  459274 logs.go:282] 0 containers: []
	W1202 22:12:42.852871  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:12:42.852877  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:12:42.852935  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:12:42.878118  459274 cri.go:89] found id: ""
	I1202 22:12:42.878143  459274 logs.go:282] 0 containers: []
	W1202 22:12:42.878151  459274 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:12:42.878158  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:12:42.878222  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:12:42.903579  459274 cri.go:89] found id: ""
	I1202 22:12:42.903604  459274 logs.go:282] 0 containers: []
	W1202 22:12:42.903612  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:12:42.903618  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:12:42.903686  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:12:42.928587  459274 cri.go:89] found id: ""
	I1202 22:12:42.928612  459274 logs.go:282] 0 containers: []
	W1202 22:12:42.928621  459274 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:12:42.928628  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:12:42.928685  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:12:42.953728  459274 cri.go:89] found id: ""
	I1202 22:12:42.953752  459274 logs.go:282] 0 containers: []
	W1202 22:12:42.953770  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:12:42.953777  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:12:42.953835  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:12:42.979320  459274 cri.go:89] found id: ""
	I1202 22:12:42.979343  459274 logs.go:282] 0 containers: []
	W1202 22:12:42.979351  459274 logs.go:284] No container was found matching "storage-provisioner"
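	Editor's note: every CRI query above returned an empty list, i.e. the kubelet never created any control-plane containers. A hedged follow-up from inside the node (crictl ships in the kicbase image these tests use):

	    sudo crictl pods     # expected empty here, matching the scans above
	    sudo crictl images   # tells whether control-plane images were at least pulled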
	I1202 22:12:42.979361  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:12:42.979376  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:12:42.996435  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:12:42.996516  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:12:43.060407  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:12:43.060471  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:12:43.060490  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:12:43.101101  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:12:43.101135  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:12:43.132122  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:12:43.132189  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1202 22:12:43.194958  459274 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000568173s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 22:12:43.195006  459274 out.go:285] * 
	W1202 22:12:43.195066  459274 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000568173s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
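	Editor's note: the two commands kubeadm suggests can be run from the host against the failing profile, e.g. (a sketch; profile name taken from this test's start command below):

	    out/minikube-linux-arm64 ssh -p kubernetes-upgrade-578337 -- sudo systemctl status kubelet --no-pager
	    out/minikube-linux-arm64 ssh -p kubernetes-upgrade-578337 -- sudo journalctl -xeu kubelet --no-pager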
	
	W1202 22:12:43.195083  459274 out.go:285] * 
	W1202 22:12:43.197231  459274 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
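	Editor's note: the advice in the box, spelled out for this profile (minikube logs supports --file, as the box itself states):

	    out/minikube-linux-arm64 -p kubernetes-upgrade-578337 logs --file=logs.txt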
	I1202 22:12:43.202174  459274 out.go:203] 
	W1202 22:12:43.205949  459274 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000568173s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 22:12:43.206040  459274 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 22:12:43.206072  459274 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 22:12:43.209591  459274 out.go:203] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-578337 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 109
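Editor's note: combining the failing start command with the Suggestion printed above gives the obvious retry; a hedged sketch, not attempted in this run (the suggestion is minikube's generic advice and may not address the cgroup v1 deprecation in kubelet v1.35):

    out/minikube-linux-arm64 start -p kubernetes-upgrade-578337 --memory=3072 \
      --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 \
      --driver=docker --container-runtime=containerd \
      --extra-config=kubelet.cgroup-driver=systemd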
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-578337 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-578337 version --output=json: exit status 1 (92.124295ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
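Editor's note: the client half of 'kubectl version' succeeded; only the server call was refused. A hedged way to confirm the endpoint itself is down (address taken from the stderr above; on a Linux host the container IP is normally reachable directly):

    curl -sk https://192.168.76.2:8443/healthz   # expect connection refused, matching kubectl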
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-02 22:12:43.885782681 +0000 UTC m=+4987.819445787
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect kubernetes-upgrade-578337
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-578337:

-- stdout --
	[
	    {
	        "Id": "71bbc00f8f327cc7ea29e3e4394402692f4f7f4301067af0b174cc26f70f4b2a",
	        "Created": "2025-12-02T21:59:44.9193044Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 459472,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T22:00:13.832613693Z",
	            "FinishedAt": "2025-12-02T22:00:12.599256551Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/71bbc00f8f327cc7ea29e3e4394402692f4f7f4301067af0b174cc26f70f4b2a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/71bbc00f8f327cc7ea29e3e4394402692f4f7f4301067af0b174cc26f70f4b2a/hostname",
	        "HostsPath": "/var/lib/docker/containers/71bbc00f8f327cc7ea29e3e4394402692f4f7f4301067af0b174cc26f70f4b2a/hosts",
	        "LogPath": "/var/lib/docker/containers/71bbc00f8f327cc7ea29e3e4394402692f4f7f4301067af0b174cc26f70f4b2a/71bbc00f8f327cc7ea29e3e4394402692f4f7f4301067af0b174cc26f70f4b2a-json.log",
	        "Name": "/kubernetes-upgrade-578337",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-578337:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-578337",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "71bbc00f8f327cc7ea29e3e4394402692f4f7f4301067af0b174cc26f70f4b2a",
	                "LowerDir": "/var/lib/docker/overlay2/cb1ab8bb5458511e9dffbf38755d9d7110a1a0fec82d65b393d2fa08307a6581-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb1ab8bb5458511e9dffbf38755d9d7110a1a0fec82d65b393d2fa08307a6581/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb1ab8bb5458511e9dffbf38755d9d7110a1a0fec82d65b393d2fa08307a6581/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb1ab8bb5458511e9dffbf38755d9d7110a1a0fec82d65b393d2fa08307a6581/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-578337",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-578337/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-578337",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-578337",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-578337",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cd6801df2c0173d1f56df64b035d558014faa55098ba5eead9abd63179d0fa7b",
	            "SandboxKey": "/var/run/docker/netns/cd6801df2c01",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33333"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33334"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33337"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33335"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33336"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-578337": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:5b:91:93:11:95",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c3ddcf7e1ec22c574ee8ee193936575250b0f4761fe33512156a534ec2f334ed",
	                    "EndpointID": "585281783a9aa9b126071f6efa55e199849086830d043205b120b7f61e8c8a17",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-578337",
	                        "71bbc00f8f32"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
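Editor's note: individual fields of the dump can be pulled with docker's Go-template output, e.g. (values match the JSON above):

    docker inspect -f '{{.State.Status}}' kubernetes-upgrade-578337
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-578337   # 33336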
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-578337 -n kubernetes-upgrade-578337
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-578337 -n kubernetes-upgrade-578337: exit status 2 (353.495498ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
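Editor's note: the harness treats exit status 2 from 'minikube status' as possibly OK because the host itself is Running; the JSON form shows which components are down (hedged sketch; 'status' supports --output json):

    out/minikube-linux-arm64 status -p kubernetes-upgrade-578337 --output json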
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-578337 logs -n 25
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-577910 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-577910            │ jenkins │ v1.37.0 │ 02 Dec 25 22:05 UTC │                     │
	│ ssh     │ -p cilium-577910 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-577910            │ jenkins │ v1.37.0 │ 02 Dec 25 22:05 UTC │                     │
	│ ssh     │ -p cilium-577910 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-577910            │ jenkins │ v1.37.0 │ 02 Dec 25 22:05 UTC │                     │
	│ ssh     │ -p cilium-577910 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-577910            │ jenkins │ v1.37.0 │ 02 Dec 25 22:05 UTC │                     │
	│ ssh     │ -p cilium-577910 sudo containerd config dump                                                                                                                                                                                                        │ cilium-577910            │ jenkins │ v1.37.0 │ 02 Dec 25 22:05 UTC │                     │
	│ ssh     │ -p cilium-577910 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-577910            │ jenkins │ v1.37.0 │ 02 Dec 25 22:05 UTC │                     │
	│ ssh     │ -p cilium-577910 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-577910            │ jenkins │ v1.37.0 │ 02 Dec 25 22:05 UTC │                     │
	│ ssh     │ -p cilium-577910 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-577910            │ jenkins │ v1.37.0 │ 02 Dec 25 22:05 UTC │                     │
	│ ssh     │ -p cilium-577910 sudo crio config                                                                                                                                                                                                                   │ cilium-577910            │ jenkins │ v1.37.0 │ 02 Dec 25 22:05 UTC │                     │
	│ delete  │ -p cilium-577910                                                                                                                                                                                                                                    │ cilium-577910            │ jenkins │ v1.37.0 │ 02 Dec 25 22:05 UTC │ 02 Dec 25 22:05 UTC │
	│ start   │ -p force-systemd-env-573431 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-573431 │ jenkins │ v1.37.0 │ 02 Dec 25 22:05 UTC │ 02 Dec 25 22:06 UTC │
	│ ssh     │ force-systemd-env-573431 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-573431 │ jenkins │ v1.37.0 │ 02 Dec 25 22:06 UTC │ 02 Dec 25 22:06 UTC │
	│ delete  │ -p force-systemd-env-573431                                                                                                                                                                                                                         │ force-systemd-env-573431 │ jenkins │ v1.37.0 │ 02 Dec 25 22:06 UTC │ 02 Dec 25 22:06 UTC │
	│ start   │ -p cert-expiration-859548 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-859548   │ jenkins │ v1.37.0 │ 02 Dec 25 22:06 UTC │ 02 Dec 25 22:06 UTC │
	│ start   │ -p cert-expiration-859548 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-859548   │ jenkins │ v1.37.0 │ 02 Dec 25 22:09 UTC │ 02 Dec 25 22:09 UTC │
	│ delete  │ -p cert-expiration-859548                                                                                                                                                                                                                           │ cert-expiration-859548   │ jenkins │ v1.37.0 │ 02 Dec 25 22:09 UTC │ 02 Dec 25 22:09 UTC │
	│ start   │ -p cert-options-309892 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-309892      │ jenkins │ v1.37.0 │ 02 Dec 25 22:09 UTC │ 02 Dec 25 22:10 UTC │
	│ ssh     │ cert-options-309892 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-309892      │ jenkins │ v1.37.0 │ 02 Dec 25 22:10 UTC │ 02 Dec 25 22:10 UTC │
	│ ssh     │ -p cert-options-309892 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-309892      │ jenkins │ v1.37.0 │ 02 Dec 25 22:10 UTC │ 02 Dec 25 22:10 UTC │
	│ delete  │ -p cert-options-309892                                                                                                                                                                                                                              │ cert-options-309892      │ jenkins │ v1.37.0 │ 02 Dec 25 22:10 UTC │ 02 Dec 25 22:10 UTC │
	│ start   │ -p old-k8s-version-996157 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-996157   │ jenkins │ v1.37.0 │ 02 Dec 25 22:10 UTC │ 02 Dec 25 22:11 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-996157 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-996157   │ jenkins │ v1.37.0 │ 02 Dec 25 22:11 UTC │ 02 Dec 25 22:11 UTC │
	│ stop    │ -p old-k8s-version-996157 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-996157   │ jenkins │ v1.37.0 │ 02 Dec 25 22:11 UTC │ 02 Dec 25 22:11 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-996157 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-996157   │ jenkins │ v1.37.0 │ 02 Dec 25 22:11 UTC │ 02 Dec 25 22:11 UTC │
	│ start   │ -p old-k8s-version-996157 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-996157   │ jenkins │ v1.37.0 │ 02 Dec 25 22:11 UTC │ 02 Dec 25 22:12 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 22:11:52
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 22:11:52.561427  506652 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:11:52.561677  506652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:11:52.561709  506652 out.go:374] Setting ErrFile to fd 2...
	I1202 22:11:52.561730  506652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:11:52.561997  506652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 22:11:52.562424  506652 out.go:368] Setting JSON to false
	I1202 22:11:52.563334  506652 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14051,"bootTime":1764699462,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 22:11:52.563445  506652 start.go:143] virtualization:  
	I1202 22:11:52.566429  506652 out.go:179] * [old-k8s-version-996157] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:11:52.570372  506652 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:11:52.570485  506652 notify.go:221] Checking for updates...
	I1202 22:11:52.577137  506652 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:11:52.579965  506652 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:11:52.582852  506652 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 22:11:52.585748  506652 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:11:52.588541  506652 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:11:52.591953  506652 config.go:182] Loaded profile config "old-k8s-version-996157": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1202 22:11:52.595480  506652 out.go:179] * Kubernetes 1.34.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.2
	I1202 22:11:52.598231  506652 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:11:52.633033  506652 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:11:52.633155  506652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:11:52.709019  506652 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:11:52.699759096 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:11:52.709122  506652 docker.go:319] overlay module found
	I1202 22:11:52.712202  506652 out.go:179] * Using the docker driver based on existing profile
	I1202 22:11:52.715021  506652 start.go:309] selected driver: docker
	I1202 22:11:52.715038  506652 start.go:927] validating driver "docker" against &{Name:old-k8s-version-996157 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-996157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:11:52.715144  506652 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:11:52.715865  506652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:11:52.776944  506652 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:11:52.767126072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:11:52.777284  506652 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 22:11:52.777318  506652 cni.go:84] Creating CNI manager for ""
	I1202 22:11:52.777383  506652 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:11:52.777426  506652 start.go:353] cluster config:
	{Name:old-k8s-version-996157 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-996157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:11:52.780686  506652 out.go:179] * Starting "old-k8s-version-996157" primary control-plane node in "old-k8s-version-996157" cluster
	I1202 22:11:52.783571  506652 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 22:11:52.786450  506652 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 22:11:52.789165  506652 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 22:11:52.789332  506652 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1202 22:11:52.789363  506652 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1202 22:11:52.789371  506652 cache.go:65] Caching tarball of preloaded images
	I1202 22:11:52.789440  506652 preload.go:238] Found /home/jenkins/minikube-integration/21997-261381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1202 22:11:52.789454  506652 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1202 22:11:52.789570  506652 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/config.json ...
	I1202 22:11:52.808624  506652 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 22:11:52.808646  506652 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 22:11:52.808667  506652 cache.go:243] Successfully downloaded all kic artifacts
	I1202 22:11:52.808697  506652 start.go:360] acquireMachinesLock for old-k8s-version-996157: {Name:mk27fdd208a1e42803351f885d6ad8593acbd4f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:11:52.808758  506652 start.go:364] duration metric: took 36.766µs to acquireMachinesLock for "old-k8s-version-996157"
	I1202 22:11:52.808780  506652 start.go:96] Skipping create...Using existing machine configuration
	I1202 22:11:52.808790  506652 fix.go:54] fixHost starting: 
	I1202 22:11:52.809051  506652 cli_runner.go:164] Run: docker container inspect old-k8s-version-996157 --format={{.State.Status}}
	I1202 22:11:52.827104  506652 fix.go:112] recreateIfNeeded on old-k8s-version-996157: state=Stopped err=<nil>
	W1202 22:11:52.827137  506652 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 22:11:52.830464  506652 out.go:252] * Restarting existing docker container for "old-k8s-version-996157" ...
	I1202 22:11:52.830545  506652 cli_runner.go:164] Run: docker start old-k8s-version-996157
	I1202 22:11:53.088598  506652 cli_runner.go:164] Run: docker container inspect old-k8s-version-996157 --format={{.State.Status}}
	I1202 22:11:53.110264  506652 kic.go:430] container "old-k8s-version-996157" state is running.
	I1202 22:11:53.110657  506652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-996157
	I1202 22:11:53.142304  506652 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/config.json ...
	I1202 22:11:53.142534  506652 machine.go:94] provisionDockerMachine start ...
	I1202 22:11:53.142593  506652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-996157
	I1202 22:11:53.170023  506652 main.go:143] libmachine: Using SSH client type: native
	I1202 22:11:53.170350  506652 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1202 22:11:53.170360  506652 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 22:11:53.171009  506652 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52990->127.0.0.1:33383: read: connection reset by peer
	I1202 22:11:56.316949  506652 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-996157
	
	I1202 22:11:56.317035  506652 ubuntu.go:182] provisioning hostname "old-k8s-version-996157"
	I1202 22:11:56.317141  506652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-996157
	I1202 22:11:56.334494  506652 main.go:143] libmachine: Using SSH client type: native
	I1202 22:11:56.334808  506652 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1202 22:11:56.334827  506652 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-996157 && echo "old-k8s-version-996157" | sudo tee /etc/hostname
	I1202 22:11:56.491213  506652 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-996157
	
	I1202 22:11:56.491308  506652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-996157
	I1202 22:11:56.510080  506652 main.go:143] libmachine: Using SSH client type: native
	I1202 22:11:56.510422  506652 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1202 22:11:56.510448  506652 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-996157' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-996157/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-996157' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 22:11:56.662098  506652 main.go:143] libmachine: SSH cmd err, output: <nil>: 
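	[Note: the shell snippet above is the provisioner's idempotent hostname pin: leave /etc/hosts alone if some entry already ends in the hostname, otherwise rewrite the 127.0.1.1 line or append one. A minimal Go sketch of the same edit, for illustration only; pinHostname is an invented helper, not minikube's code:]
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// pinHostname mirrors the shell above: do nothing if any /etc/hosts entry
	// already ends in the hostname; otherwise rewrite an existing 127.0.1.1
	// line, or append a fresh one if none exists.
	func pinHostname(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		for _, l := range lines {
			fields := strings.Fields(l)
			if len(fields) > 1 && fields[len(fields)-1] == hostname {
				return nil // hostname already mapped; nothing to do
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname // rewrite the loopback alias
				return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
			}
		}
		lines = append(lines, "127.0.1.1 "+hostname) // no 127.0.1.1 line: append
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
	}
	
	func main() {
		if err := pinHostname("/etc/hosts", "old-k8s-version-996157"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}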
	I1202 22:11:56.662126  506652 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 22:11:56.662155  506652 ubuntu.go:190] setting up certificates
	I1202 22:11:56.662171  506652 provision.go:84] configureAuth start
	I1202 22:11:56.662286  506652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-996157
	I1202 22:11:56.679431  506652 provision.go:143] copyHostCerts
	I1202 22:11:56.679507  506652 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 22:11:56.679528  506652 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 22:11:56.679607  506652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 22:11:56.679755  506652 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 22:11:56.679767  506652 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 22:11:56.679795  506652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 22:11:56.679849  506652 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 22:11:56.679858  506652 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 22:11:56.679881  506652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 22:11:56.679926  506652 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-996157 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-996157]
	I1202 22:11:56.772278  506652 provision.go:177] copyRemoteCerts
	I1202 22:11:56.772344  506652 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 22:11:56.772399  506652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-996157
	I1202 22:11:56.793207  506652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/old-k8s-version-996157/id_rsa Username:docker}
	I1202 22:11:56.909326  506652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 22:11:56.926190  506652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1202 22:11:56.943113  506652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 22:11:56.960703  506652 provision.go:87] duration metric: took 298.505373ms to configureAuth
	I1202 22:11:56.960732  506652 ubuntu.go:206] setting minikube options for container-runtime
	I1202 22:11:56.960937  506652 config.go:182] Loaded profile config "old-k8s-version-996157": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1202 22:11:56.960951  506652 machine.go:97] duration metric: took 3.818409158s to provisionDockerMachine
	I1202 22:11:56.960960  506652 start.go:293] postStartSetup for "old-k8s-version-996157" (driver="docker")
	I1202 22:11:56.960977  506652 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 22:11:56.961033  506652 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 22:11:56.961081  506652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-996157
	I1202 22:11:56.979016  506652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/old-k8s-version-996157/id_rsa Username:docker}
	I1202 22:11:57.081403  506652 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 22:11:57.084516  506652 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 22:11:57.084542  506652 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 22:11:57.084553  506652 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 22:11:57.084607  506652 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 22:11:57.084680  506652 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 22:11:57.084774  506652 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 22:11:57.092088  506652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:11:57.109125  506652 start.go:296] duration metric: took 148.142273ms for postStartSetup
	I1202 22:11:57.109217  506652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:11:57.109259  506652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-996157
	I1202 22:11:57.126512  506652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/old-k8s-version-996157/id_rsa Username:docker}
	I1202 22:11:57.226370  506652 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 22:11:57.230831  506652 fix.go:56] duration metric: took 4.422035464s for fixHost
	I1202 22:11:57.230855  506652 start.go:83] releasing machines lock for "old-k8s-version-996157", held for 4.422086474s
	I1202 22:11:57.230922  506652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-996157
	I1202 22:11:57.248216  506652 ssh_runner.go:195] Run: cat /version.json
	I1202 22:11:57.248277  506652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-996157
	I1202 22:11:57.248568  506652 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 22:11:57.248629  506652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-996157
	I1202 22:11:57.274936  506652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/old-k8s-version-996157/id_rsa Username:docker}
	I1202 22:11:57.275481  506652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/old-k8s-version-996157/id_rsa Username:docker}
	I1202 22:11:57.377139  506652 ssh_runner.go:195] Run: systemctl --version
	I1202 22:11:57.480669  506652 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 22:11:57.485172  506652 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 22:11:57.485250  506652 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 22:11:57.493068  506652 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 22:11:57.493092  506652 start.go:496] detecting cgroup driver to use...
	I1202 22:11:57.493140  506652 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 22:11:57.493201  506652 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 22:11:57.510820  506652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 22:11:57.524887  506652 docker.go:218] disabling cri-docker service (if available) ...
	I1202 22:11:57.524963  506652 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 22:11:57.543830  506652 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 22:11:57.558806  506652 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 22:11:57.683815  506652 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 22:11:57.801023  506652 docker.go:234] disabling docker service ...
	I1202 22:11:57.801109  506652 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 22:11:57.816755  506652 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 22:11:57.829446  506652 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 22:11:57.939404  506652 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 22:11:58.063243  506652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 22:11:58.076530  506652 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 22:11:58.091062  506652 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1202 22:11:58.100903  506652 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 22:11:58.110276  506652 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 22:11:58.110389  506652 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 22:11:58.118783  506652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:11:58.127297  506652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 22:11:58.135507  506652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:11:58.144044  506652 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 22:11:58.152560  506652 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 22:11:58.161036  506652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 22:11:58.170807  506652 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1202 22:11:58.179922  506652 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 22:11:58.188742  506652 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 22:11:58.196063  506652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:11:58.304319  506652 ssh_runner.go:195] Run: sudo systemctl restart containerd
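	[Note: the sed pipeline above edits /etc/containerd/config.toml key by key rather than templating a whole file. In effect it enforces the following settings before the containerd restart; they are shown flat because their enclosing [plugins...] tables vary across containerd releases, so the nesting is deliberately omitted:]
	
	sandbox_image = "registry.k8s.io/pause:3.9"
	restrict_oom_score_adj = false
	SystemdCgroup = false            # cgroupfs rather than the systemd cgroup driver
	conf_dir = "/etc/cni/net.d"
	enable_unprivileged_ports = true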
	I1202 22:11:58.457440  506652 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 22:11:58.457578  506652 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 22:11:58.462011  506652 start.go:564] Will wait 60s for crictl version
	I1202 22:11:58.462174  506652 ssh_runner.go:195] Run: which crictl
	I1202 22:11:58.466216  506652 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 22:11:58.492693  506652 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 22:11:58.492820  506652 ssh_runner.go:195] Run: containerd --version
	I1202 22:11:58.512390  506652 ssh_runner.go:195] Run: containerd --version
	I1202 22:11:58.536530  506652 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1202 22:11:58.539587  506652 cli_runner.go:164] Run: docker network inspect old-k8s-version-996157 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:11:58.557951  506652 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 22:11:58.561978  506652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:11:58.571627  506652 kubeadm.go:884] updating cluster {Name:old-k8s-version-996157 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-996157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 22:11:58.571745  506652 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1202 22:11:58.571810  506652 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:11:58.596292  506652 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 22:11:58.596320  506652 containerd.go:534] Images already preloaded, skipping extraction
	I1202 22:11:58.596379  506652 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:11:58.620589  506652 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 22:11:58.620610  506652 cache_images.go:86] Images are preloaded, skipping loading
	I1202 22:11:58.620618  506652 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 containerd true true} ...
	I1202 22:11:58.620716  506652 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-996157 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-996157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 22:11:58.620777  506652 ssh_runner.go:195] Run: sudo crictl info
	I1202 22:11:58.645083  506652 cni.go:84] Creating CNI manager for ""
	I1202 22:11:58.645158  506652 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:11:58.645189  506652 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 22:11:58.645239  506652 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-996157 NodeName:old-k8s-version-996157 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 22:11:58.645389  506652 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-996157"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 22:11:58.645500  506652 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1202 22:11:58.653048  506652 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 22:11:58.653126  506652 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 22:11:58.660280  506652 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1202 22:11:58.672583  506652 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 22:11:58.684862  506652 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I1202 22:11:58.697356  506652 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 22:11:58.700969  506652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:11:58.710797  506652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:11:58.831707  506652 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:11:58.850534  506652 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157 for IP: 192.168.85.2
	I1202 22:11:58.850568  506652 certs.go:195] generating shared ca certs ...
	I1202 22:11:58.850584  506652 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:11:58.850714  506652 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 22:11:58.850760  506652 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 22:11:58.850767  506652 certs.go:257] generating profile certs ...
	I1202 22:11:58.850856  506652 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.key
	I1202 22:11:58.850916  506652 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/apiserver.key.62520c3e
	I1202 22:11:58.850953  506652 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/proxy-client.key
	I1202 22:11:58.851065  506652 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 22:11:58.851094  506652 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 22:11:58.851102  506652 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 22:11:58.851128  506652 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 22:11:58.851152  506652 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 22:11:58.851174  506652 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 22:11:58.851217  506652 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:11:58.851881  506652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 22:11:58.874562  506652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 22:11:58.896450  506652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 22:11:58.917181  506652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 22:11:58.938168  506652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1202 22:11:58.960072  506652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 22:11:58.980197  506652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 22:11:59.001867  506652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 22:11:59.027659  506652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 22:11:59.067427  506652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 22:11:59.112155  506652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 22:11:59.131630  506652 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 22:11:59.144406  506652 ssh_runner.go:195] Run: openssl version
	I1202 22:11:59.153123  506652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 22:11:59.162581  506652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:11:59.167437  506652 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:11:59.167582  506652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:11:59.210864  506652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 22:11:59.218516  506652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 22:11:59.226630  506652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 22:11:59.230259  506652 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 22:11:59.230365  506652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 22:11:59.276554  506652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
	I1202 22:11:59.284525  506652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 22:11:59.292655  506652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 22:11:59.296402  506652 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 22:11:59.296478  506652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 22:11:59.337670  506652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 22:11:59.345600  506652 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 22:11:59.349339  506652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 22:11:59.390143  506652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 22:11:59.430930  506652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 22:11:59.473583  506652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 22:11:59.514910  506652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 22:11:59.566687  506652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
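	[Note: each openssl x509 -checkend 86400 probe above exits non-zero if the certificate expires within the next 86400 seconds (24 hours). A rough Go equivalent of one probe, as a sketch only; expiresWithin is an invented helper, and the certificate path is taken from the log:]
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the PEM certificate at path expires inside
	// the given window, e.g. 24h to match -checkend 86400.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		if soon {
			fmt.Println("certificate expires within 24h") // openssl would exit 1 here
			os.Exit(1)
		}
		fmt.Println("certificate is valid beyond 24h")
	}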
	I1202 22:11:59.638862  506652 kubeadm.go:401] StartCluster: {Name:old-k8s-version-996157 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-996157 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:11:59.639002  506652 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 22:11:59.639121  506652 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:11:59.715482  506652 cri.go:89] found id: "419b8ff3e31dd7dde39d9d77dd85f3ad039915fce679cd09d64be93168469714"
	I1202 22:11:59.715552  506652 cri.go:89] found id: "2f1767ec921a851611d51ab65220d200b95de7e3871b57b45fdab0d65ba88a41"
	I1202 22:11:59.715573  506652 cri.go:89] found id: "3225f442adac008cafcd8b37138349084e416726cadc7832c902ca3525273fbb"
	I1202 22:11:59.715594  506652 cri.go:89] found id: "0985709acc25d48bca386d43cd03b8eb4baaf2990b4f095a430888f64a81aa68"
	I1202 22:11:59.715626  506652 cri.go:89] found id: "25631b424c4adab52a70dc8a8e2677fc6ba69a430a5e26248dbff830b27f991a"
	I1202 22:11:59.715651  506652 cri.go:89] found id: "9e23ba1d74476370e4a0dfcc7461fabf66b03fbcdb222a5259dd7412dd637958"
	I1202 22:11:59.715669  506652 cri.go:89] found id: "38c6ed0b29425a0e4ee9bc228f57e21e1501811312b362fd3f1e9b2f47f708dd"
	I1202 22:11:59.715688  506652 cri.go:89] found id: "85fb80f3347391650deca28a04893aa42d9d80891f63795213f9c06e3580ad94"
	I1202 22:11:59.715708  506652 cri.go:89] found id: ""
	I1202 22:11:59.715782  506652 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1202 22:11:59.743321  506652 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"47194c43989dbdb2b7f6286501c9d7a485911f36ba594d85eb7fa174c1f14f5d","pid":840,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/47194c43989dbdb2b7f6286501c9d7a485911f36ba594d85eb7fa174c1f14f5d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/47194c43989dbdb2b7f6286501c9d7a485911f36ba594d85eb7fa174c1f14f5d/rootfs","created":"2025-12-02T22:11:59.582748564Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"47194c43989dbdb2b7f6286501c9d7a485911f36ba594d85eb7fa174c1f14f5d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-old-k8s-version-996157_6d9cf7745670fff796a0aec0836962b1","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-old-k8s-version-996157","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"6d9cf7745670fff796a0aec0836962b1"},"owner":"root"},{"ociVersion":"1.2.1","id":"66149abbdaaf4c1871c70ceb6cd0c4349f812efc53dd493f0667f52532092bd9","pid":913,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/66149abbdaaf4c1871c70ceb6cd0c4349f812efc53dd493f0667f52532092bd9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/66149abbdaaf4c1871c70ceb6cd0c4349f812efc53dd493f0667f52532092bd9/rootfs","created":"2025-12-02T22:11:59.640402492Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"66149abbdaaf4c1871c70ceb6cd0c4349f812efc53dd493f0667f52532092bd9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-old-k8s-version-996157_5f81941e3522178237a7b1be578f1d79","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-old-k8s-version-996157","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5f81941e3522178237a7b1be578f1d79"},"owner":"root"},{"ociVersion":"1.2.1","id":"6fcbec86634594e1b6dd3eb8200a5aab053fdda198443da4f872b681ea5c3cff","pid":956,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fcbec86634594e1b6dd3eb8200a5aab053fdda198443da4f872b681ea5c3cff","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fcbec86634594e1b6dd3eb8200a5aab053fdda198443da4f872b681ea5c3cff/rootfs","created":"2025-12-02T22:11:59.709136576Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"6fcbec86634594e1b6dd3eb8200a5aab053fdda198443da4f872b681ea5c3cff","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-996157_704c7c209d1ea63585af740c39efedd5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-old-k8s-version-996157","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"704c7c209d1ea63585af740c39efedd5"},"owner":"root"},{"ociVersion":"1.2.1","id":"7297c14bd48bca9151b7b4171b008856c1033bc0f6dafb4fe177b305bfe31e48","pid":970,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7297c14bd48bca9151b7b4171b008856c1033bc0f6dafb4fe177b305bfe31e48","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7297c14bd48bca9151b7b4171b008856c1033bc0f6dafb4fe177b305bfe31e48/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.9","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"7297c14bd48bca9151b7b4171b008856c1033bc0f6dafb4fe177b305bfe31e48","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-old-k8s-version-996157_ca0b50710e00ec77571cb85c507d6ab6","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-old-k8s-version-996157","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ca0b50710e00ec77571cb85c507d6ab6"},"owner":"root"}]
	I1202 22:11:59.743527  506652 cri.go:126] list returned 4 containers
	I1202 22:11:59.743558  506652 cri.go:129] container: {ID:47194c43989dbdb2b7f6286501c9d7a485911f36ba594d85eb7fa174c1f14f5d Status:running}
	I1202 22:11:59.743588  506652 cri.go:131] skipping 47194c43989dbdb2b7f6286501c9d7a485911f36ba594d85eb7fa174c1f14f5d - not in ps
	I1202 22:11:59.743618  506652 cri.go:129] container: {ID:66149abbdaaf4c1871c70ceb6cd0c4349f812efc53dd493f0667f52532092bd9 Status:running}
	I1202 22:11:59.743651  506652 cri.go:131] skipping 66149abbdaaf4c1871c70ceb6cd0c4349f812efc53dd493f0667f52532092bd9 - not in ps
	I1202 22:11:59.743668  506652 cri.go:129] container: {ID:6fcbec86634594e1b6dd3eb8200a5aab053fdda198443da4f872b681ea5c3cff Status:created}
	I1202 22:11:59.743688  506652 cri.go:131] skipping 6fcbec86634594e1b6dd3eb8200a5aab053fdda198443da4f872b681ea5c3cff - not in ps
	I1202 22:11:59.743707  506652 cri.go:129] container: {ID:7297c14bd48bca9151b7b4171b008856c1033bc0f6dafb4fe177b305bfe31e48 Status:created}
	I1202 22:11:59.743738  506652 cri.go:131] skipping 7297c14bd48bca9151b7b4171b008856c1033bc0f6dafb4fe177b305bfe31e48 - not in ps
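
The sequence above is minikube cross-referencing the CRI view of kube-system containers (crictl) against the low-level runc task list: the four runc entries are pod sandboxes ("pause" tasks), which never appear in the crictl container listing, so all four are skipped. A manual reproduction from inside the node, with jq assumed to be available purely as a filtering convenience (minikube itself parses this JSON in Go):

  $ sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
  $ sudo runc --root /run/containerd/runc/k8s.io list -f json \
      | jq -r '.[] | select(.annotations["io.kubernetes.cri.container-type"] == "sandbox") | .id'
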
	I1202 22:11:59.743811  506652 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 22:11:59.767333  506652 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 22:11:59.767391  506652 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 22:11:59.767486  506652 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 22:11:59.778531  506652 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 22:11:59.779140  506652 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-996157" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:11:59.779401  506652 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-261381/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-996157" cluster setting kubeconfig missing "old-k8s-version-996157" context setting]
	I1202 22:11:59.779872  506652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:11:59.781177  506652 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 22:11:59.802085  506652 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 22:11:59.802159  506652 kubeadm.go:602] duration metric: took 34.748496ms to restartPrimaryControlPlane
	I1202 22:11:59.802227  506652 kubeadm.go:403] duration metric: took 163.330178ms to StartCluster
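
At this point the restart path has found intact control-plane configuration on disk, so it skips a full re-init and only repairs the kubeconfig, which was missing the cluster and context entries for this profile. Once repaired, the entry can be confirmed by hand against the same kubeconfig path reported in the log:

  $ kubectl --kubeconfig /home/jenkins/minikube-integration/21997-261381/kubeconfig \
      config get-contexts old-k8s-version-996157
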
	I1202 22:11:59.802562  506652 settings.go:142] acquiring lock: {Name:mk484fa83ac7553aeb154b510943680cadb4046e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:11:59.805534  506652 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:11:59.806671  506652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:11:59.807168  506652 config.go:182] Loaded profile config "old-k8s-version-996157": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1202 22:11:59.807241  506652 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 22:11:59.807300  506652 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 22:11:59.807388  506652 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-996157"
	I1202 22:11:59.807419  506652 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-996157"
	W1202 22:11:59.807452  506652 addons.go:248] addon storage-provisioner should already be in state true
	I1202 22:11:59.807495  506652 host.go:66] Checking if "old-k8s-version-996157" exists ...
	I1202 22:11:59.808025  506652 cli_runner.go:164] Run: docker container inspect old-k8s-version-996157 --format={{.State.Status}}
	I1202 22:11:59.808599  506652 addons.go:70] Setting metrics-server=true in profile "old-k8s-version-996157"
	I1202 22:11:59.808620  506652 addons.go:239] Setting addon metrics-server=true in "old-k8s-version-996157"
	W1202 22:11:59.808627  506652 addons.go:248] addon metrics-server should already be in state true
	I1202 22:11:59.808648  506652 host.go:66] Checking if "old-k8s-version-996157" exists ...
	I1202 22:11:59.809051  506652 cli_runner.go:164] Run: docker container inspect old-k8s-version-996157 --format={{.State.Status}}
	I1202 22:11:59.809216  506652 addons.go:70] Setting dashboard=true in profile "old-k8s-version-996157"
	I1202 22:11:59.809260  506652 addons.go:239] Setting addon dashboard=true in "old-k8s-version-996157"
	W1202 22:11:59.809266  506652 addons.go:248] addon dashboard should already be in state true
	I1202 22:11:59.809285  506652 host.go:66] Checking if "old-k8s-version-996157" exists ...
	I1202 22:11:59.809714  506652 cli_runner.go:164] Run: docker container inspect old-k8s-version-996157 --format={{.State.Status}}
	I1202 22:11:59.815858  506652 out.go:179] * Verifying Kubernetes components...
	I1202 22:11:59.816003  506652 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-996157"
	I1202 22:11:59.816023  506652 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-996157"
	I1202 22:11:59.816354  506652 cli_runner.go:164] Run: docker container inspect old-k8s-version-996157 --format={{.State.Status}}
	I1202 22:11:59.818984  506652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:11:59.887492  506652 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 22:11:59.890659  506652 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:11:59.893797  506652 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1202 22:11:59.893862  506652 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:11:59.893878  506652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 22:11:59.893941  506652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-996157
	I1202 22:11:59.894209  506652 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 22:11:59.899452  506652 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-996157"
	W1202 22:11:59.899477  506652 addons.go:248] addon default-storageclass should already be in state true
	I1202 22:11:59.899504  506652 host.go:66] Checking if "old-k8s-version-996157" exists ...
	I1202 22:11:59.899931  506652 cli_runner.go:164] Run: docker container inspect old-k8s-version-996157 --format={{.State.Status}}
	I1202 22:11:59.900604  506652 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 22:11:59.900627  506652 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 22:11:59.900690  506652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-996157
	I1202 22:11:59.902444  506652 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 22:11:59.902463  506652 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 22:11:59.902519  506652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-996157
	I1202 22:11:59.957813  506652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/old-k8s-version-996157/id_rsa Username:docker}
	I1202 22:11:59.965804  506652 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 22:11:59.965823  506652 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 22:11:59.965888  506652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-996157
	I1202 22:11:59.966286  506652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/old-k8s-version-996157/id_rsa Username:docker}
	I1202 22:11:59.980096  506652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/old-k8s-version-996157/id_rsa Username:docker}
	I1202 22:11:59.993749  506652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/old-k8s-version-996157/id_rsa Username:docker}
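
With the SSH sessions established, each addon manifest is staged under /etc/kubernetes/addons on the node and then applied with the cluster's own kubectl binary. The staged files and the resulting addon state can be checked from the host, for example:

  $ minikube -p old-k8s-version-996157 ssh "sudo ls /etc/kubernetes/addons"
  $ minikube -p old-k8s-version-996157 addons list
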
	I1202 22:12:00.369797  506652 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:12:00.512052  506652 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 22:12:00.512085  506652 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 22:12:00.549950  506652 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-996157" to be "Ready" ...
	I1202 22:12:00.559912  506652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 22:12:00.683015  506652 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 22:12:00.683041  506652 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 22:12:00.706397  506652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:12:00.719329  506652 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 22:12:00.719354  506652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1202 22:12:00.848481  506652 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 22:12:00.848508  506652 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W1202 22:12:00.903540  506652 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:12:00.903588  506652 retry.go:31] will retry after 239.348103ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:12:00.922467  506652 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 22:12:00.922500  506652 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 22:12:00.962122  506652 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 22:12:00.962146  506652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 22:12:01.027584  506652 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 22:12:01.027608  506652 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 22:12:01.052285  506652 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 22:12:01.052309  506652 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 22:12:01.063270  506652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1202 22:12:01.098956  506652 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:12:01.098999  506652 retry.go:31] will retry after 342.671629ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
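
These early applies fail only because the restarted apiserver is not yet listening on localhost:8443; retry.go simply re-runs each apply after a short delay until it succeeds. A minimal sketch of the same behavior from inside the node, using a fixed one-second sleep in place of minikube's jittered backoff:

  $ until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml; do
      sleep 1
    done
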
	I1202 22:12:01.131457  506652 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 22:12:01.131479  506652 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 22:12:01.143649  506652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 22:12:01.186977  506652 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 22:12:01.187006  506652 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 22:12:01.258186  506652 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 22:12:01.258258  506652 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 22:12:01.419772  506652 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:12:01.419797  506652 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 22:12:01.442198  506652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:12:01.546573  506652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:12:05.190303  506652 node_ready.go:49] node "old-k8s-version-996157" is "Ready"
	I1202 22:12:05.190337  506652 node_ready.go:38] duration metric: took 4.6403476s for node "old-k8s-version-996157" to be "Ready" ...
	I1202 22:12:05.190352  506652 api_server.go:52] waiting for apiserver process to appear ...
	I1202 22:12:05.190455  506652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:12:07.576507  506652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.513197148s)
	I1202 22:12:07.576539  506652 addons.go:495] Verifying addon metrics-server=true in "old-k8s-version-996157"
	I1202 22:12:07.576638  506652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (6.432944922s)
	I1202 22:12:07.799493  506652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.357242221s)
	I1202 22:12:08.386382  506652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.839700921s)
	I1202 22:12:08.386508  506652 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.196036821s)
	I1202 22:12:08.386531  506652 api_server.go:72] duration metric: took 8.579252844s to wait for apiserver process to appear ...
	I1202 22:12:08.386560  506652 api_server.go:88] waiting for apiserver healthz status ...
	I1202 22:12:08.386590  506652 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1202 22:12:08.390905  506652 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-996157 addons enable metrics-server
	
	I1202 22:12:08.395969  506652 out.go:179] * Enabled addons: metrics-server, default-storageclass, storage-provisioner, dashboard
	I1202 22:12:08.399086  506652 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
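
The healthz probe that succeeds here can be reproduced directly. The -k flag skips verification of the cluster's self-signed serving certificate, and Kubernetes' default RBAC permits unauthenticated reads of /healthz; the context name assumes the kubeconfig repaired earlier in this run:

  $ curl -k https://192.168.85.2:8443/healthz
  $ kubectl --context old-k8s-version-996157 get --raw /healthz
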
	I1202 22:12:08.399316  506652 addons.go:530] duration metric: took 8.592017548s for enable addons: enabled=[metrics-server default-storageclass storage-provisioner dashboard]
	I1202 22:12:08.400511  506652 api_server.go:141] control plane version: v1.28.0
	I1202 22:12:08.400537  506652 api_server.go:131] duration metric: took 13.969601ms to wait for apiserver health ...
	I1202 22:12:08.400545  506652 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 22:12:08.409388  506652 system_pods.go:59] 9 kube-system pods found
	I1202 22:12:08.409447  506652 system_pods.go:61] "coredns-5dd5756b68-c9j9t" [3f55665f-d41b-47c3-a616-f6eba2807e21] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 22:12:08.409458  506652 system_pods.go:61] "etcd-old-k8s-version-996157" [92ac154e-6445-4ade-a8ec-52a0df7edf5d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 22:12:08.409474  506652 system_pods.go:61] "kindnet-rn4lz" [a1a174f7-2658-4c79-8ad2-90978b6d0a62] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 22:12:08.409482  506652 system_pods.go:61] "kube-apiserver-old-k8s-version-996157" [dbf94958-afb8-47e6-90c0-6aa177a6ddd3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 22:12:08.409493  506652 system_pods.go:61] "kube-controller-manager-old-k8s-version-996157" [571a51fc-4073-466b-9f9f-12a335a63fb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 22:12:08.409500  506652 system_pods.go:61] "kube-proxy-fgkhl" [9207a7bd-dc94-45dc-9211-f9ced4c5a28a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 22:12:08.409513  506652 system_pods.go:61] "kube-scheduler-old-k8s-version-996157" [0bf38a96-a596-425a-963d-b93448929aa8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 22:12:08.409519  506652 system_pods.go:61] "metrics-server-57f55c9bc5-75j6p" [1d80b607-7131-4860-b423-74fdb5579cd4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 22:12:08.409525  506652 system_pods.go:61] "storage-provisioner" [1f64a82b-6e18-43eb-ae47-fa81fd900a21] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 22:12:08.409536  506652 system_pods.go:74] duration metric: took 8.985068ms to wait for pod list to return data ...
	I1202 22:12:08.409545  506652 default_sa.go:34] waiting for default service account to be created ...
	I1202 22:12:08.412872  506652 default_sa.go:45] found service account: "default"
	I1202 22:12:08.412899  506652 default_sa.go:55] duration metric: took 3.348287ms for default service account to be created ...
	I1202 22:12:08.412909  506652 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 22:12:08.424012  506652 system_pods.go:86] 9 kube-system pods found
	I1202 22:12:08.424052  506652 system_pods.go:89] "coredns-5dd5756b68-c9j9t" [3f55665f-d41b-47c3-a616-f6eba2807e21] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 22:12:08.424061  506652 system_pods.go:89] "etcd-old-k8s-version-996157" [92ac154e-6445-4ade-a8ec-52a0df7edf5d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 22:12:08.424092  506652 system_pods.go:89] "kindnet-rn4lz" [a1a174f7-2658-4c79-8ad2-90978b6d0a62] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1202 22:12:08.424107  506652 system_pods.go:89] "kube-apiserver-old-k8s-version-996157" [dbf94958-afb8-47e6-90c0-6aa177a6ddd3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 22:12:08.424114  506652 system_pods.go:89] "kube-controller-manager-old-k8s-version-996157" [571a51fc-4073-466b-9f9f-12a335a63fb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 22:12:08.424127  506652 system_pods.go:89] "kube-proxy-fgkhl" [9207a7bd-dc94-45dc-9211-f9ced4c5a28a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 22:12:08.424133  506652 system_pods.go:89] "kube-scheduler-old-k8s-version-996157" [0bf38a96-a596-425a-963d-b93448929aa8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 22:12:08.424150  506652 system_pods.go:89] "metrics-server-57f55c9bc5-75j6p" [1d80b607-7131-4860-b423-74fdb5579cd4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1202 22:12:08.424170  506652 system_pods.go:89] "storage-provisioner" [1f64a82b-6e18-43eb-ae47-fa81fd900a21] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 22:12:08.424186  506652 system_pods.go:126] duration metric: took 11.249283ms to wait for k8s-apps to be running ...
	I1202 22:12:08.424205  506652 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 22:12:08.424282  506652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:12:08.449357  506652 system_svc.go:56] duration metric: took 25.15199ms WaitForService to wait for kubelet
	I1202 22:12:08.449384  506652 kubeadm.go:587] duration metric: took 8.64210418s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 22:12:08.449424  506652 node_conditions.go:102] verifying NodePressure condition ...
	I1202 22:12:08.453491  506652 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1202 22:12:08.453520  506652 node_conditions.go:123] node cpu capacity is 2
	I1202 22:12:08.453532  506652 node_conditions.go:105] duration metric: took 4.096172ms to run NodePressure ...
	I1202 22:12:08.453566  506652 start.go:242] waiting for startup goroutines ...
	I1202 22:12:08.453581  506652 start.go:247] waiting for cluster config update ...
	I1202 22:12:08.453593  506652 start.go:256] writing updated cluster config ...
	I1202 22:12:08.453903  506652 ssh_runner.go:195] Run: rm -f paused
	I1202 22:12:08.458111  506652 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 22:12:08.468524  506652 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-c9j9t" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 22:12:10.474921  506652 pod_ready.go:104] pod "coredns-5dd5756b68-c9j9t" is not "Ready", error: <nil>
	W1202 22:12:12.974428  506652 pod_ready.go:104] pod "coredns-5dd5756b68-c9j9t" is not "Ready", error: <nil>
	W1202 22:12:14.974831  506652 pod_ready.go:104] pod "coredns-5dd5756b68-c9j9t" is not "Ready", error: <nil>
	W1202 22:12:16.975363  506652 pod_ready.go:104] pod "coredns-5dd5756b68-c9j9t" is not "Ready", error: <nil>
	W1202 22:12:19.474623  506652 pod_ready.go:104] pod "coredns-5dd5756b68-c9j9t" is not "Ready", error: <nil>
	W1202 22:12:21.475551  506652 pod_ready.go:104] pod "coredns-5dd5756b68-c9j9t" is not "Ready", error: <nil>
	W1202 22:12:23.974002  506652 pod_ready.go:104] pod "coredns-5dd5756b68-c9j9t" is not "Ready", error: <nil>
	W1202 22:12:26.475906  506652 pod_ready.go:104] pod "coredns-5dd5756b68-c9j9t" is not "Ready", error: <nil>
	W1202 22:12:28.976148  506652 pod_ready.go:104] pod "coredns-5dd5756b68-c9j9t" is not "Ready", error: <nil>
	W1202 22:12:31.474185  506652 pod_ready.go:104] pod "coredns-5dd5756b68-c9j9t" is not "Ready", error: <nil>
	W1202 22:12:33.474494  506652 pod_ready.go:104] pod "coredns-5dd5756b68-c9j9t" is not "Ready", error: <nil>
	W1202 22:12:35.474569  506652 pod_ready.go:104] pod "coredns-5dd5756b68-c9j9t" is not "Ready", error: <nil>
	W1202 22:12:37.478779  506652 pod_ready.go:104] pod "coredns-5dd5756b68-c9j9t" is not "Ready", error: <nil>
	I1202 22:12:38.974678  506652 pod_ready.go:94] pod "coredns-5dd5756b68-c9j9t" is "Ready"
	I1202 22:12:38.974707  506652 pod_ready.go:86] duration metric: took 30.506155648s for pod "coredns-5dd5756b68-c9j9t" in "kube-system" namespace to be "Ready" or be gone ...
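
The thirty seconds spent above is coredns restarting and passing its readiness probe. A one-off equivalent with stock kubectl, using the same k8s-app=kube-dns label from the wait list:

  $ kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
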
	I1202 22:12:38.977715  506652 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-996157" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:12:38.982403  506652 pod_ready.go:94] pod "etcd-old-k8s-version-996157" is "Ready"
	I1202 22:12:38.982488  506652 pod_ready.go:86] duration metric: took 4.745631ms for pod "etcd-old-k8s-version-996157" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:12:38.986137  506652 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-996157" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:12:38.993887  506652 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-996157" is "Ready"
	I1202 22:12:38.993915  506652 pod_ready.go:86] duration metric: took 7.749813ms for pod "kube-apiserver-old-k8s-version-996157" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:12:38.997220  506652 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-996157" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:12:39.172337  506652 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-996157" is "Ready"
	I1202 22:12:39.172414  506652 pod_ready.go:86] duration metric: took 175.167556ms for pod "kube-controller-manager-old-k8s-version-996157" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:12:39.373122  506652 pod_ready.go:83] waiting for pod "kube-proxy-fgkhl" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:12:39.771884  506652 pod_ready.go:94] pod "kube-proxy-fgkhl" is "Ready"
	I1202 22:12:39.771912  506652 pod_ready.go:86] duration metric: took 398.756822ms for pod "kube-proxy-fgkhl" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:12:39.972506  506652 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-996157" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:12:40.372274  506652 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-996157" is "Ready"
	I1202 22:12:40.372305  506652 pod_ready.go:86] duration metric: took 399.713562ms for pod "kube-scheduler-old-k8s-version-996157" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 22:12:40.372352  506652 pod_ready.go:40] duration metric: took 31.914206662s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 22:12:40.430653  506652 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1202 22:12:40.433806  506652 out.go:203] 
	W1202 22:12:40.436743  506652 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1202 22:12:40.439577  506652 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1202 22:12:40.442481  506652 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-996157" cluster and "default" namespace by default
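
The closing warning is about client/server skew: kubectl 1.33.2 against a 1.28.0 control plane is five minor versions apart, well outside kubectl's supported +/-1 skew. The log's own suggestion avoids this by running the version-matched kubectl that minikube downloads for the cluster:

  $ minikube -p old-k8s-version-996157 kubectl -- get pods -A
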
	I1202 22:12:42.768146  459274 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000568173s
	I1202 22:12:42.768178  459274 kubeadm.go:319] 
	I1202 22:12:42.768235  459274 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 22:12:42.768268  459274 kubeadm.go:319] 	- The kubelet is not running
	I1202 22:12:42.768372  459274 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 22:12:42.768378  459274 kubeadm.go:319] 
	I1202 22:12:42.768483  459274 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 22:12:42.768523  459274 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 22:12:42.768555  459274 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 22:12:42.768559  459274 kubeadm.go:319] 
	I1202 22:12:42.772849  459274 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 22:12:42.773257  459274 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 22:12:42.773366  459274 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 22:12:42.773590  459274 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 22:12:42.773600  459274 kubeadm.go:319] 
	I1202 22:12:42.773683  459274 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1202 22:12:42.773744  459274 kubeadm.go:403] duration metric: took 12m10.765327212s to StartCluster
	I1202 22:12:42.773784  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:12:42.773842  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:12:42.799579  459274 cri.go:89] found id: ""
	I1202 22:12:42.799603  459274 logs.go:282] 0 containers: []
	W1202 22:12:42.799611  459274 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:12:42.799618  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:12:42.799682  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:12:42.827888  459274 cri.go:89] found id: ""
	I1202 22:12:42.827911  459274 logs.go:282] 0 containers: []
	W1202 22:12:42.827920  459274 logs.go:284] No container was found matching "etcd"
	I1202 22:12:42.827926  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:12:42.827984  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:12:42.852837  459274 cri.go:89] found id: ""
	I1202 22:12:42.852862  459274 logs.go:282] 0 containers: []
	W1202 22:12:42.852871  459274 logs.go:284] No container was found matching "coredns"
	I1202 22:12:42.852877  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:12:42.852935  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:12:42.878118  459274 cri.go:89] found id: ""
	I1202 22:12:42.878143  459274 logs.go:282] 0 containers: []
	W1202 22:12:42.878151  459274 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:12:42.878158  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:12:42.878222  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:12:42.903579  459274 cri.go:89] found id: ""
	I1202 22:12:42.903604  459274 logs.go:282] 0 containers: []
	W1202 22:12:42.903612  459274 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:12:42.903618  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:12:42.903686  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:12:42.928587  459274 cri.go:89] found id: ""
	I1202 22:12:42.928612  459274 logs.go:282] 0 containers: []
	W1202 22:12:42.928621  459274 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:12:42.928628  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:12:42.928685  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:12:42.953728  459274 cri.go:89] found id: ""
	I1202 22:12:42.953752  459274 logs.go:282] 0 containers: []
	W1202 22:12:42.953770  459274 logs.go:284] No container was found matching "kindnet"
	I1202 22:12:42.953777  459274 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1202 22:12:42.953835  459274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 22:12:42.979320  459274 cri.go:89] found id: ""
	I1202 22:12:42.979343  459274 logs.go:282] 0 containers: []
	W1202 22:12:42.979351  459274 logs.go:284] No container was found matching "storage-provisioner"
	I1202 22:12:42.979361  459274 logs.go:123] Gathering logs for dmesg ...
	I1202 22:12:42.979376  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:12:42.996435  459274 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:12:42.996516  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:12:43.060407  459274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:12:43.060471  459274 logs.go:123] Gathering logs for containerd ...
	I1202 22:12:43.060490  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:12:43.101101  459274 logs.go:123] Gathering logs for container status ...
	I1202 22:12:43.101135  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:12:43.132122  459274 logs.go:123] Gathering logs for kubelet ...
	I1202 22:12:43.132189  459274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1202 22:12:43.194958  459274 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000568173s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 22:12:43.195006  459274 out.go:285] * 
	W1202 22:12:43.195066  459274 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000568173s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 22:12:43.195083  459274 out.go:285] * 
	W1202 22:12:43.197231  459274 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 22:12:43.202174  459274 out.go:203] 
	W1202 22:12:43.205949  459274 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000568173s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 22:12:43.206040  459274 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 22:12:43.206072  459274 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 22:12:43.209591  459274 out.go:203] 
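
A hedged aside, not part of the captured output: the SystemVerification warning above says kubelet v1.35+ on a cgroup v1 host refuses to start unless the KubeletConfiguration option 'FailCgroupV1' is explicitly set to 'false', which matches the kubelet crash loop shown further below. A minimal sketch of that opt-out on the node (field name from the warning, serialized as failCgroupV1; file path from the kubeadm output above; untested here):

	# sketch only: append the opt-out field named in the warning to the
	# kubelet config that kubeadm wrote, then restart the service
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet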
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 22:04:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:04:37.466352720Z" level=info msg="StopPodSandbox for \"4ec226682569de6a642dc2b0fdd93849257d6317b9ec1458437badd2074a8263\" returns successfully"
	Dec 02 22:04:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:04:37.466737259Z" level=info msg="RemovePodSandbox for \"4ec226682569de6a642dc2b0fdd93849257d6317b9ec1458437badd2074a8263\""
	Dec 02 22:04:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:04:37.466772490Z" level=info msg="Forcibly stopping sandbox \"4ec226682569de6a642dc2b0fdd93849257d6317b9ec1458437badd2074a8263\""
	Dec 02 22:04:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:04:37.466826577Z" level=info msg="Container to stop \"7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Dec 02 22:04:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:04:37.467180463Z" level=info msg="TearDown network for sandbox \"4ec226682569de6a642dc2b0fdd93849257d6317b9ec1458437badd2074a8263\" successfully"
	Dec 02 22:04:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:04:37.475918909Z" level=info msg="Ensure that sandbox 4ec226682569de6a642dc2b0fdd93849257d6317b9ec1458437badd2074a8263 in task-service has been cleanup successfully"
	Dec 02 22:04:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:04:37.499940135Z" level=info msg="RemovePodSandbox \"4ec226682569de6a642dc2b0fdd93849257d6317b9ec1458437badd2074a8263\" returns successfully"
	Dec 02 22:04:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:04:37.500676377Z" level=info msg="StopPodSandbox for \"738e5073b22a79eee56ee5b7b7b7dd736b282211224de1ef44613b239f15ae10\""
	Dec 02 22:04:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:04:37.500845103Z" level=info msg="Container to stop \"b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Dec 02 22:04:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:04:37.501342550Z" level=info msg="TearDown network for sandbox \"738e5073b22a79eee56ee5b7b7b7dd736b282211224de1ef44613b239f15ae10\" successfully"
	Dec 02 22:04:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:04:37.501561432Z" level=info msg="StopPodSandbox for \"738e5073b22a79eee56ee5b7b7b7dd736b282211224de1ef44613b239f15ae10\" returns successfully"
	Dec 02 22:04:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:04:37.502134560Z" level=info msg="RemovePodSandbox for \"738e5073b22a79eee56ee5b7b7b7dd736b282211224de1ef44613b239f15ae10\""
	Dec 02 22:04:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:04:37.502242996Z" level=info msg="Forcibly stopping sandbox \"738e5073b22a79eee56ee5b7b7b7dd736b282211224de1ef44613b239f15ae10\""
	Dec 02 22:04:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:04:37.502333652Z" level=info msg="Container to stop \"b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Dec 02 22:04:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:04:37.503885059Z" level=info msg="TearDown network for sandbox \"738e5073b22a79eee56ee5b7b7b7dd736b282211224de1ef44613b239f15ae10\" successfully"
	Dec 02 22:04:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:04:37.515608308Z" level=info msg="Ensure that sandbox 738e5073b22a79eee56ee5b7b7b7dd736b282211224de1ef44613b239f15ae10 in task-service has been cleanup successfully"
	Dec 02 22:04:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:04:37.522158852Z" level=info msg="RemovePodSandbox \"738e5073b22a79eee56ee5b7b7b7dd736b282211224de1ef44613b239f15ae10\" returns successfully"
	Dec 02 22:09:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:09:37.443298074Z" level=info msg="container event discarded" container=aad17f876b6c6e87e3d5916c73ce9f467a56e464946423182d5fab0a5d335cb4 type=CONTAINER_DELETED_EVENT
	Dec 02 22:09:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:09:37.458145312Z" level=info msg="container event discarded" container=9eba975ba9273018148902f600b81c3bb7cbf1dfdf0564262429fceabf4db60d type=CONTAINER_DELETED_EVENT
	Dec 02 22:09:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:09:37.469538641Z" level=info msg="container event discarded" container=4f63737248a927d55bf63504cc5759f0399ce470993ebb209ff18091ba946da3 type=CONTAINER_DELETED_EVENT
	Dec 02 22:09:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:09:37.469593475Z" level=info msg="container event discarded" container=c68270fd98dc0f395e81068be83f3b0ee02cec431d9e73b4ea676a0fab7bcd11 type=CONTAINER_DELETED_EVENT
	Dec 02 22:09:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:09:37.486948515Z" level=info msg="container event discarded" container=7b8e4a11867ae8ee6c0199326be895216bcb12f9c07487dc0b6103c36d67822d type=CONTAINER_DELETED_EVENT
	Dec 02 22:09:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:09:37.511243644Z" level=info msg="container event discarded" container=4ec226682569de6a642dc2b0fdd93849257d6317b9ec1458437badd2074a8263 type=CONTAINER_DELETED_EVENT
	Dec 02 22:09:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:09:37.526513148Z" level=info msg="container event discarded" container=b0fd2e79153aa18affb41e5ac4da2c842e6ee8a427632d9779ceadc0e47643b0 type=CONTAINER_DELETED_EVENT
	Dec 02 22:09:37 kubernetes-upgrade-578337 containerd[556]: time="2025-12-02T22:09:37.526565487Z" level=info msg="container event discarded" container=738e5073b22a79eee56ee5b7b7b7dd736b282211224de1ef44613b239f15ae10 type=CONTAINER_DELETED_EVENT
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
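
A hedged diagnostic aside (assumed commands, not from this capture): the refusal on localhost:8443 is a symptom rather than a cause; with kubelet crash-looping, the control-plane static pods defined by the manifests kubeadm wrote above are never started. One way to confirm that from the node:

	# sketch only: the manifests exist, but no control-plane containers run
	sudo ls /etc/kubernetes/manifests/
	sudo crictl ps -a | grep -E 'kube-apiserver|etcd'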
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 22:12:44 up  3:55,  0 user,  load average: 1.27, 1.59, 1.79
	Linux kubernetes-upgrade-578337 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 22:12:41 kubernetes-upgrade-578337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:12:42 kubernetes-upgrade-578337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 02 22:12:42 kubernetes-upgrade-578337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:12:42 kubernetes-upgrade-578337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:12:42 kubernetes-upgrade-578337 kubelet[14112]: E1202 22:12:42.597238   14112 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:12:42 kubernetes-upgrade-578337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:12:42 kubernetes-upgrade-578337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:12:43 kubernetes-upgrade-578337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 02 22:12:43 kubernetes-upgrade-578337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:12:43 kubernetes-upgrade-578337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:12:43 kubernetes-upgrade-578337 kubelet[14207]: E1202 22:12:43.350710   14207 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:12:43 kubernetes-upgrade-578337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:12:43 kubernetes-upgrade-578337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:12:44 kubernetes-upgrade-578337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 02 22:12:44 kubernetes-upgrade-578337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:12:44 kubernetes-upgrade-578337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:12:44 kubernetes-upgrade-578337 kubelet[14212]: E1202 22:12:44.108825   14212 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:12:44 kubernetes-upgrade-578337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:12:44 kubernetes-upgrade-578337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:12:44 kubernetes-upgrade-578337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 02 22:12:44 kubernetes-upgrade-578337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:12:44 kubernetes-upgrade-578337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:12:44 kubernetes-upgrade-578337 kubelet[14312]: E1202 22:12:44.851486   14312 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:12:44 kubernetes-upgrade-578337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:12:44 kubernetes-upgrade-578337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
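
A hedged aside on the kubelet log above (command assumed, not from this run): systemd restarted kubelet more than 300 times, each attempt failing the same cgroup v1 validation, so kubeadm's 4m0s wait on http://127.0.0.1:10248/healthz could never succeed. The host's cgroup mode can be checked directly:

	# sketch only: 'tmpfs' indicates cgroup v1, 'cgroup2fs' indicates cgroup v2
	stat -fc %T /sys/fs/cgroup/
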
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-578337 -n kubernetes-upgrade-578337
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-578337 -n kubernetes-upgrade-578337: exit status 2 (435.618202ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-578337" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-578337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-578337
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-578337: (2.151447885s)
--- FAIL: TestKubernetesUpgrade (789.62s)
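
A hedged repro sketch (binary, driver, runtime, and Kubernetes version taken from the failing invocation; the extra-config flag is the suggestion printed in the log above; untested here):

	out/minikube-linux-arm64 start -p kubernetes-upgrade-578337 \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd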

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (512.69s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-904303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1202 22:12:50.413787  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-904303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m31.114281103s)

                                                
                                                
-- stdout --
	* [no-preload-904303] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-904303" primary control-plane node in "no-preload-904303" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 22:12:47.818025  510395 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:12:47.818602  510395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:12:47.818657  510395 out.go:374] Setting ErrFile to fd 2...
	I1202 22:12:47.818683  510395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:12:47.819020  510395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 22:12:47.819642  510395 out.go:368] Setting JSON to false
	I1202 22:12:47.820775  510395 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14106,"bootTime":1764699462,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 22:12:47.820889  510395 start.go:143] virtualization:  
	I1202 22:12:47.824575  510395 out.go:179] * [no-preload-904303] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:12:47.829108  510395 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:12:47.829199  510395 notify.go:221] Checking for updates...
	I1202 22:12:47.835980  510395 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:12:47.839404  510395 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:12:47.842649  510395 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 22:12:47.845806  510395 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:12:47.848943  510395 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:12:47.852616  510395 config.go:182] Loaded profile config "old-k8s-version-996157": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1202 22:12:47.852741  510395 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:12:47.885258  510395 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:12:47.885371  510395 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:12:47.954966  510395 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:12:47.945836376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:12:47.955078  510395 docker.go:319] overlay module found
	I1202 22:12:47.960300  510395 out.go:179] * Using the docker driver based on user configuration
	I1202 22:12:47.963273  510395 start.go:309] selected driver: docker
	I1202 22:12:47.963295  510395 start.go:927] validating driver "docker" against <nil>
	I1202 22:12:47.963308  510395 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:12:47.964078  510395 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:12:48.035059  510395 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:12:48.025797574 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:12:48.035230  510395 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 22:12:48.035475  510395 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 22:12:48.038509  510395 out.go:179] * Using Docker driver with root privileges
	I1202 22:12:48.041395  510395 cni.go:84] Creating CNI manager for ""
	I1202 22:12:48.041468  510395 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:12:48.041480  510395 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 22:12:48.041567  510395 start.go:353] cluster config:
	{Name:no-preload-904303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-904303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:12:48.044815  510395 out.go:179] * Starting "no-preload-904303" primary control-plane node in "no-preload-904303" cluster
	I1202 22:12:48.047740  510395 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 22:12:48.050648  510395 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 22:12:48.053467  510395 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:12:48.053615  510395 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/config.json ...
	I1202 22:12:48.053690  510395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/config.json: {Name:mk9f9980ab6d8f30f06555ee42a9b687a1164c27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:12:48.053889  510395 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 22:12:48.054164  510395 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:12:48.054239  510395 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 22:12:48.054249  510395 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 92.937µs
	I1202 22:12:48.054264  510395 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 22:12:48.054282  510395 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:12:48.054318  510395 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 22:12:48.054323  510395 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 49.09µs
	I1202 22:12:48.054329  510395 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 22:12:48.054339  510395 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:12:48.054366  510395 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 22:12:48.054371  510395 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 33.411µs
	I1202 22:12:48.054381  510395 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 22:12:48.054392  510395 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:12:48.054420  510395 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 22:12:48.054425  510395 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 34.296µs
	I1202 22:12:48.054431  510395 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 22:12:48.054441  510395 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:12:48.054470  510395 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 22:12:48.054475  510395 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 36.126µs
	I1202 22:12:48.054480  510395 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 22:12:48.054490  510395 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:12:48.054515  510395 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 22:12:48.054519  510395 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 30.334µs
	I1202 22:12:48.054524  510395 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 22:12:48.054532  510395 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:12:48.054557  510395 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 22:12:48.054561  510395 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 30.013µs
	I1202 22:12:48.054567  510395 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 22:12:48.054575  510395 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:12:48.054599  510395 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 22:12:48.054603  510395 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 29.406µs
	I1202 22:12:48.054608  510395 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 22:12:48.054614  510395 cache.go:87] Successfully saved all images to host disk.
	I1202 22:12:48.072194  510395 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 22:12:48.072218  510395 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 22:12:48.072234  510395 cache.go:243] Successfully downloaded all kic artifacts
	I1202 22:12:48.072264  510395 start.go:360] acquireMachinesLock for no-preload-904303: {Name:mk2c72bf119f004a39efee961482984889590787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:12:48.072371  510395 start.go:364] duration metric: took 89.417µs to acquireMachinesLock for "no-preload-904303"
	I1202 22:12:48.072403  510395 start.go:93] Provisioning new machine with config: &{Name:no-preload-904303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-904303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 22:12:48.072474  510395 start.go:125] createHost starting for "" (driver="docker")
	I1202 22:12:48.077834  510395 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 22:12:48.078085  510395 start.go:159] libmachine.API.Create for "no-preload-904303" (driver="docker")
	I1202 22:12:48.078124  510395 client.go:173] LocalClient.Create starting
	I1202 22:12:48.078203  510395 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem
	I1202 22:12:48.078253  510395 main.go:143] libmachine: Decoding PEM data...
	I1202 22:12:48.078273  510395 main.go:143] libmachine: Parsing certificate...
	I1202 22:12:48.078335  510395 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem
	I1202 22:12:48.078352  510395 main.go:143] libmachine: Decoding PEM data...
	I1202 22:12:48.078364  510395 main.go:143] libmachine: Parsing certificate...
	I1202 22:12:48.078726  510395 cli_runner.go:164] Run: docker network inspect no-preload-904303 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 22:12:48.095135  510395 cli_runner.go:211] docker network inspect no-preload-904303 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 22:12:48.095225  510395 network_create.go:284] running [docker network inspect no-preload-904303] to gather additional debugging logs...
	I1202 22:12:48.095247  510395 cli_runner.go:164] Run: docker network inspect no-preload-904303
	W1202 22:12:48.112368  510395 cli_runner.go:211] docker network inspect no-preload-904303 returned with exit code 1
	I1202 22:12:48.112405  510395 network_create.go:287] error running [docker network inspect no-preload-904303]: docker network inspect no-preload-904303: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-904303 not found
	I1202 22:12:48.112423  510395 network_create.go:289] output of [docker network inspect no-preload-904303]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-904303 not found
	
	** /stderr **
	I1202 22:12:48.112568  510395 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:12:48.129844  510395 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-37045a918311 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:0e:6a:1d:5f:aa} reservation:<nil>}
	I1202 22:12:48.130175  510395 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-11c615b6a402 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:e5:fa:65:65:bf} reservation:<nil>}
	I1202 22:12:48.137078  510395 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-efeb1d3ec8c6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:0d:78:3a:6e:22} reservation:<nil>}
	I1202 22:12:48.137598  510395 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a5700}
	I1202 22:12:48.137622  510395 network_create.go:124] attempt to create docker network no-preload-904303 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1202 22:12:48.137703  510395 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-904303 no-preload-904303
	I1202 22:12:48.199995  510395 network_create.go:108] docker network no-preload-904303 192.168.76.0/24 created
	I1202 22:12:48.200038  510395 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-904303" container
	I1202 22:12:48.200112  510395 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 22:12:48.217044  510395 cli_runner.go:164] Run: docker volume create no-preload-904303 --label name.minikube.sigs.k8s.io=no-preload-904303 --label created_by.minikube.sigs.k8s.io=true
	I1202 22:12:48.235648  510395 oci.go:103] Successfully created a docker volume no-preload-904303
	I1202 22:12:48.235737  510395 cli_runner.go:164] Run: docker run --rm --name no-preload-904303-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-904303 --entrypoint /usr/bin/test -v no-preload-904303:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 22:12:48.808235  510395 oci.go:107] Successfully prepared a docker volume no-preload-904303
	I1202 22:12:48.808298  510395 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1202 22:12:48.808443  510395 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1202 22:12:48.808548  510395 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 22:12:48.875315  510395 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-904303 --name no-preload-904303 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-904303 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-904303 --network no-preload-904303 --ip 192.168.76.2 --volume no-preload-904303:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 22:12:49.231664  510395 cli_runner.go:164] Run: docker container inspect no-preload-904303 --format={{.State.Running}}
	I1202 22:12:49.258198  510395 cli_runner.go:164] Run: docker container inspect no-preload-904303 --format={{.State.Status}}
	I1202 22:12:49.286817  510395 cli_runner.go:164] Run: docker exec no-preload-904303 stat /var/lib/dpkg/alternatives/iptables
	I1202 22:12:49.335369  510395 oci.go:144] the created container "no-preload-904303" has a running status.
	I1202 22:12:49.335435  510395 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa...
	I1202 22:12:49.560200  510395 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 22:12:49.583660  510395 cli_runner.go:164] Run: docker container inspect no-preload-904303 --format={{.State.Status}}
	I1202 22:12:49.611510  510395 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 22:12:49.611529  510395 kic_runner.go:114] Args: [docker exec --privileged no-preload-904303 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 22:12:49.674817  510395 cli_runner.go:164] Run: docker container inspect no-preload-904303 --format={{.State.Status}}
	I1202 22:12:49.696947  510395 machine.go:94] provisionDockerMachine start ...
	I1202 22:12:49.697156  510395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:12:49.731341  510395 main.go:143] libmachine: Using SSH client type: native
	I1202 22:12:49.731676  510395 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1202 22:12:49.731686  510395 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 22:12:49.732310  510395 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60574->127.0.0.1:33388: read: connection reset by peer
	I1202 22:12:52.901748  510395 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-904303
	
	I1202 22:12:52.901775  510395 ubuntu.go:182] provisioning hostname "no-preload-904303"
	I1202 22:12:52.901837  510395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:12:52.923439  510395 main.go:143] libmachine: Using SSH client type: native
	I1202 22:12:52.923755  510395 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1202 22:12:52.923766  510395 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-904303 && echo "no-preload-904303" | sudo tee /etc/hostname
	I1202 22:12:53.103612  510395 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-904303
	
	I1202 22:12:53.103692  510395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:12:53.145126  510395 main.go:143] libmachine: Using SSH client type: native
	I1202 22:12:53.145431  510395 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1202 22:12:53.145447  510395 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-904303' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-904303/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-904303' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 22:12:53.306097  510395 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 22:12:53.306126  510395 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 22:12:53.306148  510395 ubuntu.go:190] setting up certificates
	I1202 22:12:53.306158  510395 provision.go:84] configureAuth start
	I1202 22:12:53.306223  510395 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-904303
	I1202 22:12:53.329365  510395 provision.go:143] copyHostCerts
	I1202 22:12:53.329429  510395 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 22:12:53.329443  510395 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 22:12:53.329519  510395 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 22:12:53.329616  510395 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 22:12:53.329627  510395 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 22:12:53.329720  510395 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 22:12:53.329803  510395 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 22:12:53.329814  510395 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 22:12:53.329846  510395 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 22:12:53.329908  510395 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.no-preload-904303 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-904303]
	I1202 22:12:53.746099  510395 provision.go:177] copyRemoteCerts
	I1202 22:12:53.746167  510395 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 22:12:53.746208  510395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:12:53.765355  510395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:12:53.871941  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 22:12:53.897175  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 22:12:53.923943  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 22:12:53.950205  510395 provision.go:87] duration metric: took 644.022229ms to configureAuth
	I1202 22:12:53.950243  510395 ubuntu.go:206] setting minikube options for container-runtime
	I1202 22:12:53.950435  510395 config.go:182] Loaded profile config "no-preload-904303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:12:53.950442  510395 machine.go:97] duration metric: took 4.253463131s to provisionDockerMachine
	I1202 22:12:53.950449  510395 client.go:176] duration metric: took 5.87231446s to LocalClient.Create
	I1202 22:12:53.950464  510395 start.go:167] duration metric: took 5.872380624s to libmachine.API.Create "no-preload-904303"
	I1202 22:12:53.950471  510395 start.go:293] postStartSetup for "no-preload-904303" (driver="docker")
	I1202 22:12:53.950481  510395 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 22:12:53.950537  510395 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 22:12:53.950576  510395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:12:53.982006  510395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:12:54.097224  510395 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 22:12:54.101226  510395 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 22:12:54.101255  510395 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 22:12:54.101268  510395 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 22:12:54.101330  510395 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 22:12:54.101418  510395 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 22:12:54.101521  510395 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 22:12:54.111191  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:12:54.130857  510395 start.go:296] duration metric: took 180.371585ms for postStartSetup
	I1202 22:12:54.131234  510395 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-904303
	I1202 22:12:54.149427  510395 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/config.json ...
	I1202 22:12:54.149767  510395 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:12:54.149826  510395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:12:54.172386  510395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:12:54.283089  510395 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
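The two df probes above are a quick disk preflight: field 5 of `df -h /var` is the use percentage and field 4 of `df -BG /var` is the free space in gigabytes. Run by hand, the same pipelines look like this (output values are illustrative, not from this run):

    df -h /var  | awk 'NR==2{print $5}'   # e.g. "12%"  (use%)
    df -BG /var | awk 'NR==2{print $4}'   # e.g. "17G"  (free GB)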
	I1202 22:12:54.288095  510395 start.go:128] duration metric: took 6.215606585s to createHost
	I1202 22:12:54.288120  510395 start.go:83] releasing machines lock for "no-preload-904303", held for 6.215733966s
	I1202 22:12:54.288196  510395 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-904303
	I1202 22:12:54.309976  510395 ssh_runner.go:195] Run: cat /version.json
	I1202 22:12:54.310035  510395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:12:54.310289  510395 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 22:12:54.310355  510395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:12:54.356137  510395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:12:54.362137  510395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:12:54.461437  510395 ssh_runner.go:195] Run: systemctl --version
	I1202 22:12:54.581502  510395 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 22:12:54.586368  510395 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 22:12:54.586435  510395 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 22:12:54.616275  510395 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
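The find invocation above sidelines any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI that minikube installs later is active. An illustrative way to undo that rename, should it ever be needed:

    # Sketch only: restore configs disabled by the step above.
    sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
        -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;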
	I1202 22:12:54.616296  510395 start.go:496] detecting cgroup driver to use...
	I1202 22:12:54.616326  510395 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 22:12:54.616392  510395 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 22:12:54.636048  510395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 22:12:54.656069  510395 docker.go:218] disabling cri-docker service (if available) ...
	I1202 22:12:54.656148  510395 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 22:12:54.682342  510395 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 22:12:54.710004  510395 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 22:12:54.922371  510395 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 22:12:55.125502  510395 docker.go:234] disabling docker service ...
	I1202 22:12:55.125571  510395 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 22:12:55.177507  510395 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 22:12:55.205015  510395 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 22:12:55.424032  510395 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 22:12:55.733804  510395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 22:12:55.763074  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 22:12:55.805175  510395 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 22:12:55.819355  510395 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 22:12:55.853539  510395 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 22:12:55.853619  510395 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 22:12:55.882277  510395 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:12:55.910115  510395 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 22:12:55.923681  510395 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:12:55.941446  510395 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 22:12:55.958404  510395 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 22:12:55.972123  510395 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 22:12:55.987479  510395 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
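The sed series above rewrites /etc/containerd/config.toml in place: it pins the sandbox image, forces the runc v2 shim, sets SystemdCgroup = false to match the detected cgroupfs driver, points conf_dir at /etc/cni/net.d, and re-inserts enable_unprivileged_ports = true. Assuming all edits applied, a spot check on the node should show lines like these (expected values, illustrative):

    sudo grep -E 'sandbox_image|SystemdCgroup|enable_unprivileged_ports|conf_dir' \
        /etc/containerd/config.toml
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   SystemdCgroup = false
    #   enable_unprivileged_ports = true
    #   conf_dir = "/etc/cni/net.d"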
	I1202 22:12:55.998080  510395 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 22:12:56.026787  510395 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 22:12:56.041083  510395 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:12:56.232168  510395 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1202 22:12:56.346328  510395 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 22:12:56.346398  510395 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 22:12:56.351270  510395 start.go:564] Will wait 60s for crictl version
	I1202 22:12:56.351335  510395 ssh_runner.go:195] Run: which crictl
	I1202 22:12:56.355671  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 22:12:56.398594  510395 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 22:12:56.398660  510395 ssh_runner.go:195] Run: containerd --version
	I1202 22:12:56.429003  510395 ssh_runner.go:195] Run: containerd --version
	I1202 22:12:56.460078  510395 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 22:12:56.463033  510395 cli_runner.go:164] Run: docker network inspect no-preload-904303 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:12:56.481964  510395 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1202 22:12:56.486386  510395 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
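The /etc/hosts rewrite above is the idempotent strip-then-append pattern: drop any prior line ending in the host name, append the fresh mapping, and copy the temp file back. As a generic helper (hypothetical function name, same logic as the logged command):

    update_hosts_entry() {  # usage: update_hosts_entry 192.168.76.1 host.minikube.internal
      local ip="$1" host="$2"
      { grep -v $'\t'"$host"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$host"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts
    }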
	I1202 22:12:56.497002  510395 kubeadm.go:884] updating cluster {Name:no-preload-904303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-904303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 22:12:56.497110  510395 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:12:56.497163  510395 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:12:56.524639  510395 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1202 22:12:56.524660  510395 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1202 22:12:56.524706  510395 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:12:56.524909  510395 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:12:56.525023  510395 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:12:56.525112  510395 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:12:56.525200  510395 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:12:56.525276  510395 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1202 22:12:56.525358  510395 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:12:56.525434  510395 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:12:56.528725  510395 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:12:56.529089  510395 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1202 22:12:56.529230  510395 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:12:56.529341  510395 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:12:56.529445  510395 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:12:56.529550  510395 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:12:56.529825  510395 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:12:56.530062  510395 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:12:56.836797  510395 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1202 22:12:56.836929  510395 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:12:56.857734  510395 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1202 22:12:56.857861  510395 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:12:56.858578  510395 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1202 22:12:56.858674  510395 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1202 22:12:56.876685  510395 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1202 22:12:56.876801  510395 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:12:56.879458  510395 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1202 22:12:56.879571  510395 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:12:56.884455  510395 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1202 22:12:56.884576  510395 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1202 22:12:56.909442  510395 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1202 22:12:56.909569  510395 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
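Each existence probe above lists the image by exact name in containerd's k8s.io namespace; empty output means the expected digest is absent and the image must be transferred from the host-side cache. Reduced to a single command, the probe is:

    # Empty result => image missing in the runtime (k8s.io namespace).
    sudo ctr -n=k8s.io images ls -q name==registry.k8s.io/pause:3.10.1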
	I1202 22:12:56.924702  510395 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1202 22:12:56.924794  510395 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:12:56.924866  510395 ssh_runner.go:195] Run: which crictl
	I1202 22:12:57.022805  510395 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1202 22:12:57.022843  510395 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1202 22:12:57.022896  510395 ssh_runner.go:195] Run: which crictl
	I1202 22:12:57.022948  510395 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1202 22:12:57.022963  510395 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:12:57.022994  510395 ssh_runner.go:195] Run: which crictl
	I1202 22:12:57.023064  510395 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1202 22:12:57.023080  510395 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:12:57.023101  510395 ssh_runner.go:195] Run: which crictl
	I1202 22:12:57.036656  510395 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1202 22:12:57.036695  510395 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:12:57.036743  510395 ssh_runner.go:195] Run: which crictl
	I1202 22:12:57.053024  510395 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1202 22:12:57.053118  510395 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:12:57.053193  510395 ssh_runner.go:195] Run: which crictl
	I1202 22:12:57.068041  510395 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1202 22:12:57.068166  510395 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:12:57.068242  510395 ssh_runner.go:195] Run: which crictl
	I1202 22:12:57.068377  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:12:57.068510  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:12:57.068608  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:12:57.068703  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:12:57.068798  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:12:57.075740  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:12:57.253123  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:12:57.253291  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:12:57.253382  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:12:57.253524  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:12:57.253641  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:12:57.253785  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:12:57.253965  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:12:57.398902  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:12:57.399004  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:12:57.399083  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:12:57.399149  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:12:57.399216  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:12:57.399278  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:12:57.399334  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:12:57.504004  510395 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1202 22:12:57.504246  510395 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1202 22:12:57.504281  510395 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1202 22:12:57.504357  510395 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:12:57.504248  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:12:57.504147  510395 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1202 22:12:57.504464  510395 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1202 22:12:57.504515  510395 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:12:57.504527  510395 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:12:57.504594  510395 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1202 22:12:57.504101  510395 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1202 22:12:57.504654  510395 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:12:57.504715  510395 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:12:57.545343  510395 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1202 22:12:57.545544  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1202 22:12:57.545396  510395 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1202 22:12:57.545684  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1202 22:12:57.545423  510395 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1202 22:12:57.545706  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1202 22:12:57.545461  510395 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1202 22:12:57.545722  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1202 22:12:57.545485  510395 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1202 22:12:57.545737  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1202 22:12:57.545506  510395 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1202 22:12:57.545773  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1202 22:12:57.545621  510395 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1202 22:12:57.546016  510395 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:12:57.608755  510395 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1202 22:12:57.608833  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
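The block above is a stat-then-copy loop: `stat -c "%s %y"` exiting with status 1 ("No such file or directory") marks a tarball as absent on the node, which triggers the scp from the host cache. The per-image decision, sketched with a subset of the image names from this run:

    # Sketch of the existence check driving each scp above (paths illustrative).
    for img in pause_3.10.1 etcd_3.6.5-0 kube-apiserver_v1.35.0-beta.0; do
      stat -c '%s %y' "/var/lib/minikube/images/$img" >/dev/null 2>&1 \
        || echo "$img missing -> scp from .minikube/cache/images/arm64/"
    done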
	I1202 22:12:57.649237  510395 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1202 22:12:57.649767  510395 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	W1202 22:12:57.798145  510395 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1202 22:12:57.812900  510395 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1202 22:12:57.812971  510395 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:12:58.088495  510395 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1202 22:12:58.089150  510395 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:12:58.089242  510395 ssh_runner.go:195] Run: which crictl
	I1202 22:12:58.088920  510395 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1202 22:12:58.162017  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:12:58.270759  510395 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:12:58.270883  510395 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:12:58.361234  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:13:00.033968  510395 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.672627359s)
	I1202 22:13:00.034332  510395 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:13:00.034763  510395 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.763706086s)
	I1202 22:13:00.034829  510395 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1202 22:13:00.034889  510395 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:13:00.034945  510395 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:13:02.320024  510395 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (2.285056309s)
	I1202 22:13:02.320051  510395 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1202 22:13:02.320069  510395 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:13:02.320121  510395 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:13:02.320190  510395 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.28584247s)
	I1202 22:13:02.320222  510395 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1202 22:13:02.320297  510395 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:13:03.686371  510395 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.366051617s)
	I1202 22:13:03.686407  510395 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1202 22:13:03.686432  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1202 22:13:03.686501  510395 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.366363239s)
	I1202 22:13:03.686516  510395 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1202 22:13:03.686532  510395 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:13:03.686573  510395 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:13:05.228794  510395 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.542193151s)
	I1202 22:13:05.228827  510395 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1202 22:13:05.228851  510395 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:13:05.228902  510395 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:13:07.868702  510395 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (2.639773571s)
	I1202 22:13:07.868729  510395 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1202 22:13:07.868755  510395 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:13:07.868803  510395 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:13:09.103485  510395 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.234649896s)
	I1202 22:13:09.103510  510395 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1202 22:13:09.103525  510395 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:13:09.103573  510395 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:13:09.561328  510395 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1202 22:13:09.561358  510395 cache_images.go:125] Successfully loaded all cached images
	I1202 22:13:09.561364  510395 cache_images.go:94] duration metric: took 13.036692382s to LoadCachedImages
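Loading proceeds one tarball at a time with `ctr images import` into the k8s.io namespace, which is why the "Loading image" / "Transferred and loaded" pairs above serialize. A single load plus verification, done by hand:

    sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
    sudo /usr/local/bin/crictl images | grep pause   # should now list pause:3.10.1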
	I1202 22:13:09.561376  510395 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1202 22:13:09.561465  510395 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-904303 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-904303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 22:13:09.561534  510395 ssh_runner.go:195] Run: sudo crictl info
	I1202 22:13:09.588996  510395 cni.go:84] Creating CNI manager for ""
	I1202 22:13:09.589028  510395 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:13:09.589045  510395 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 22:13:09.589068  510395 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-904303 NodeName:no-preload-904303 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 22:13:09.589188  510395 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-904303"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
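The rendered kubeadm config above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration into one multi-document YAML. Before init runs, such a file can be sanity-checked with kubeadm's own validator (available in recent kubeadm releases; binary and config paths assumed from the log lines below):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml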
	
	I1202 22:13:09.589265  510395 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:13:09.597457  510395 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1202 22:13:09.597528  510395 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:13:09.605453  510395 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1202 22:13:09.605542  510395 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1202 22:13:09.605673  510395 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1202 22:13:09.605710  510395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:13:09.605829  510395 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1202 22:13:09.605885  510395 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1202 22:13:09.610358  510395 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1202 22:13:09.610396  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1202 22:13:09.628034  510395 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1202 22:13:09.628035  510395 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1202 22:13:09.628116  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1202 22:13:09.640977  510395 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1202 22:13:09.641018  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
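The "Not caching binary" lines above show each download URL pinned to its published checksum via the checksum=file:...sha256 suffix, after which the verified binaries are scp'd to the node. The manual equivalent of that pin, for one binary (same URLs as above, standard sha256sum check):

    curl -LO "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl"
    curl -LO "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256"
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check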
	I1202 22:13:10.424082  510395 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 22:13:10.432503  510395 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 22:13:10.446998  510395 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 22:13:10.459733  510395 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1202 22:13:10.472904  510395 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1202 22:13:10.476446  510395 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:13:10.485870  510395 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:13:10.599058  510395 ssh_runner.go:195] Run: sudo systemctl start kubelet
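With the drop-in and unit file in place, daemon-reload picks them up and kubelet is started; the drop-in's empty ExecStart= line (dumped earlier) clears the packaged command before substituting minikube's own. To inspect the effective unit on the node:

    systemctl cat kubelet              # shows kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl status kubelet --no-pager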
	I1202 22:13:10.616455  510395 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303 for IP: 192.168.76.2
	I1202 22:13:10.616476  510395 certs.go:195] generating shared ca certs ...
	I1202 22:13:10.616492  510395 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:13:10.616639  510395 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 22:13:10.616687  510395 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 22:13:10.616709  510395 certs.go:257] generating profile certs ...
	I1202 22:13:10.616766  510395 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/client.key
	I1202 22:13:10.616785  510395 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/client.crt with IP's: []
	I1202 22:13:11.203758  510395 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/client.crt ...
	I1202 22:13:11.203832  510395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/client.crt: {Name:mkd798032cd94963a0a36a0f993d14182cc73049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:13:11.204079  510395 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/client.key ...
	I1202 22:13:11.204120  510395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/client.key: {Name:mk6211fd2234bfd6fcab47d3ca9896414da09f0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:13:11.204271  510395 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/apiserver.key.c0dba49d
	I1202 22:13:11.204318  510395 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/apiserver.crt.c0dba49d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1202 22:13:11.318391  510395 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/apiserver.crt.c0dba49d ...
	I1202 22:13:11.318477  510395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/apiserver.crt.c0dba49d: {Name:mkef3c4556e2a2f0413f31238b7002cb9e46781d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:13:11.318672  510395 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/apiserver.key.c0dba49d ...
	I1202 22:13:11.318721  510395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/apiserver.key.c0dba49d: {Name:mkc2a0cac5a8075b8979586e65af07bbfb3b82e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:13:11.318836  510395 certs.go:382] copying /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/apiserver.crt.c0dba49d -> /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/apiserver.crt
	I1202 22:13:11.318960  510395 certs.go:386] copying /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/apiserver.key.c0dba49d -> /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/apiserver.key
	I1202 22:13:11.319046  510395 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/proxy-client.key
	I1202 22:13:11.319088  510395 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/proxy-client.crt with IP's: []
	I1202 22:13:11.414014  510395 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/proxy-client.crt ...
	I1202 22:13:11.414087  510395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/proxy-client.crt: {Name:mkd134d2d8c697111848fd4ec08fbc7f820edaff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:13:11.414283  510395 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/proxy-client.key ...
	I1202 22:13:11.414327  510395 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/proxy-client.key: {Name:mkc3274f1fe4f313bc34523d76c43ca5e4cc8fd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:13:11.414551  510395 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 22:13:11.414633  510395 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 22:13:11.414661  510395 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 22:13:11.414724  510395 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 22:13:11.414772  510395 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 22:13:11.414824  510395 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 22:13:11.414896  510395 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:13:11.415544  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 22:13:11.434959  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 22:13:11.471246  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 22:13:11.488791  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 22:13:11.505839  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 22:13:11.523621  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 22:13:11.551111  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 22:13:11.603271  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 22:13:11.623718  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 22:13:11.644167  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 22:13:11.668383  510395 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 22:13:11.688669  510395 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 22:13:11.702466  510395 ssh_runner.go:195] Run: openssl version
	I1202 22:13:11.709029  510395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 22:13:11.717348  510395 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 22:13:11.721520  510395 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 22:13:11.721597  510395 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 22:13:11.766332  510395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 22:13:11.775335  510395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 22:13:11.784531  510395 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:13:11.789376  510395 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:13:11.789454  510395 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:13:11.831374  510395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 22:13:11.840570  510395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 22:13:11.849887  510395 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 22:13:11.854338  510395 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 22:13:11.854413  510395 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 22:13:11.898738  510395 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
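The link names used above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names: `openssl x509 -hash` prints the hash for a certificate, and the ".0" suffix marks the first slot for that hash, which is how the TLS stack locates CA certs in /etc/ssl/certs. For example, reproducing the minikubeCA link from this run:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941, hence the symlink /etc/ssl/certs/b5213941.0 created above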
	I1202 22:13:11.907445  510395 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 22:13:11.910852  510395 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 22:13:11.910907  510395 kubeadm.go:401] StartCluster: {Name:no-preload-904303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-904303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:13:11.911360  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 22:13:11.911435  510395 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:13:11.946458  510395 cri.go:89] found id: ""
	I1202 22:13:11.946531  510395 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 22:13:11.955886  510395 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 22:13:11.963704  510395 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 22:13:11.963772  510395 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 22:13:11.971898  510395 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 22:13:11.971916  510395 kubeadm.go:158] found existing configuration files:
	
	I1202 22:13:11.971965  510395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 22:13:11.979717  510395 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 22:13:11.979798  510395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 22:13:11.988285  510395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 22:13:11.997456  510395 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 22:13:11.997522  510395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 22:13:12.006297  510395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 22:13:12.017071  510395 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 22:13:12.017144  510395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 22:13:12.027914  510395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 22:13:12.039918  510395 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 22:13:12.039988  510395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
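The four grep/rm pairs above apply one pattern per kubeconfig: if the file does not mention the expected control-plane endpoint, it is treated as stale and removed before kubeadm init rewrites it. Condensed into a loop (endpoint and paths taken from this log):

    EP=https://control-plane.minikube.internal:8443
    for f in admin kubelet controller-manager scheduler; do
        # Missing or mismatched files are deleted; kubeadm recreates them.
        sudo grep -q "$EP" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done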
	I1202 22:13:12.049289  510395 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 22:13:12.121204  510395 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 22:13:12.121696  510395 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 22:13:12.226078  510395 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 22:13:12.226267  510395 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 22:13:12.226335  510395 kubeadm.go:319] OS: Linux
	I1202 22:13:12.226424  510395 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 22:13:12.226550  510395 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 22:13:12.226644  510395 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 22:13:12.226737  510395 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 22:13:12.226829  510395 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 22:13:12.226954  510395 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 22:13:12.227039  510395 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 22:13:12.227131  510395 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 22:13:12.227220  510395 kubeadm.go:319] CGROUPS_BLKIO: enabled
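The controllers listed above are cgroup v1 names, which is why the cgroups-v1 deprecation warning shows up later in this run: the 5.15 AWS host kernel is still on the legacy hierarchy. A generic check (not from this log) to tell which cgroup version a node runs:

    # cgroup2fs means the unified v2 hierarchy; tmpfs means legacy v1.
    stat -fc %T /sys/fs/cgroup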
	I1202 22:13:12.325845  510395 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 22:13:12.326048  510395 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 22:13:12.326145  510395 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 22:13:12.341405  510395 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 22:13:12.348602  510395 out.go:252]   - Generating certificates and keys ...
	I1202 22:13:12.348708  510395 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 22:13:12.348784  510395 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 22:13:12.590999  510395 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 22:13:12.798701  510395 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 22:13:12.987687  510395 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 22:13:13.154337  510395 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 22:13:13.530079  510395 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 22:13:13.530615  510395 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-904303] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1202 22:13:13.753706  510395 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 22:13:13.754268  510395 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-904303] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1202 22:13:13.924902  510395 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 22:13:14.411353  510395 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 22:13:14.462554  510395 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 22:13:14.463005  510395 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 22:13:14.861845  510395 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 22:13:14.989813  510395 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 22:13:15.273625  510395 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 22:13:15.396735  510395 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 22:13:15.947320  510395 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 22:13:15.947422  510395 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 22:13:15.954199  510395 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 22:13:15.958459  510395 out.go:252]   - Booting up control plane ...
	I1202 22:13:15.958564  510395 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 22:13:15.958648  510395 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 22:13:15.967939  510395 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 22:13:15.984211  510395 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 22:13:15.984327  510395 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 22:13:15.993321  510395 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 22:13:15.993428  510395 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 22:13:15.993473  510395 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 22:13:16.166120  510395 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 22:13:16.166254  510395 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 22:17:16.166065  510395 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000442446s
	I1202 22:17:16.166104  510395 kubeadm.go:319] 
	I1202 22:17:16.166172  510395 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 22:17:16.166216  510395 kubeadm.go:319] 	- The kubelet is not running
	I1202 22:17:16.166735  510395 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 22:17:16.166765  510395 kubeadm.go:319] 
	I1202 22:17:16.167222  510395 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 22:17:16.167292  510395 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 22:17:16.167349  510395 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 22:17:16.167355  510395 kubeadm.go:319] 
	I1202 22:17:16.178990  510395 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 22:17:16.179486  510395 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 22:17:16.179605  510395 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 22:17:16.179872  510395 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 22:17:16.179888  510395 kubeadm.go:319] 
	I1202 22:17:16.179958  510395 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
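kubeadm's wait-control-plane phase polls the kubelet's healthz endpoint for up to 4m0s and gives up here. The commands it suggests, plus the probe itself, can be replayed inside the node; a sketch using the profile name from this run:

    # From the host: open a shell on the node.
    minikube -p no-preload-904303 ssh
    # Inside the node: the checks kubeadm recommends, then its own probe.
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 50
    curl -sSL http://127.0.0.1:10248/healthz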
	W1202 22:17:16.180072  510395 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-904303] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-904303] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000442446s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
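Per the second warning above, kubelet v1.35+ only runs on a cgroup v1 host if FailCgroupV1 is explicitly set to false. A sketch of what that opt-in fragment could look like; the field name is taken from the warning and camelCased per KubeletConfiguration convention, so verify it against the v1.35 kubelet docs before relying on it:

    # Hypothetical opt-in fragment, written out for illustration only.
    cat <<'EOF' > kubelet-cgroupv1-optin.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF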
	
	I1202 22:17:16.180158  510395 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1202 22:17:16.588239  510395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:17:16.602695  510395 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 22:17:16.602801  510395 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 22:17:16.610594  510395 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 22:17:16.610616  510395 kubeadm.go:158] found existing configuration files:
	
	I1202 22:17:16.610666  510395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 22:17:16.618276  510395 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 22:17:16.618381  510395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 22:17:16.625883  510395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 22:17:16.633609  510395 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 22:17:16.633712  510395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 22:17:16.640850  510395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 22:17:16.648766  510395 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 22:17:16.648854  510395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 22:17:16.656167  510395 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 22:17:16.663836  510395 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 22:17:16.663898  510395 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 22:17:16.671249  510395 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 22:17:16.713260  510395 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 22:17:16.713633  510395 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 22:17:16.784840  510395 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 22:17:16.784960  510395 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 22:17:16.785004  510395 kubeadm.go:319] OS: Linux
	I1202 22:17:16.785065  510395 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 22:17:16.785118  510395 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 22:17:16.785169  510395 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 22:17:16.785220  510395 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 22:17:16.785272  510395 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 22:17:16.785323  510395 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 22:17:16.785372  510395 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 22:17:16.785422  510395 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 22:17:16.785472  510395 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 22:17:16.850269  510395 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 22:17:16.850702  510395 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 22:17:16.850846  510395 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 22:17:16.862176  510395 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 22:17:16.867574  510395 out.go:252]   - Generating certificates and keys ...
	I1202 22:17:16.867676  510395 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 22:17:16.867742  510395 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 22:17:16.867818  510395 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 22:17:16.867879  510395 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 22:17:16.867948  510395 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 22:17:16.868003  510395 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 22:17:16.868066  510395 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 22:17:16.868127  510395 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 22:17:16.868201  510395 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 22:17:16.868274  510395 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 22:17:16.868311  510395 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 22:17:16.868367  510395 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 22:17:17.080984  510395 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 22:17:17.327002  510395 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 22:17:17.613742  510395 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 22:17:17.750034  510395 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 22:17:18.225255  510395 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 22:17:18.225941  510395 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 22:17:18.228525  510395 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 22:17:18.231779  510395 out.go:252]   - Booting up control plane ...
	I1202 22:17:18.231885  510395 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 22:17:18.231986  510395 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 22:17:18.233479  510395 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 22:17:18.257168  510395 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 22:17:18.257334  510395 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 22:17:18.265317  510395 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 22:17:18.265672  510395 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 22:17:18.265935  510395 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 22:17:18.401049  510395 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 22:17:18.401173  510395 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 22:21:18.402255  510395 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00114361s
	I1202 22:21:18.402290  510395 kubeadm.go:319] 
	I1202 22:21:18.402400  510395 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 22:21:18.402462  510395 kubeadm.go:319] 	- The kubelet is not running
	I1202 22:21:18.403019  510395 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 22:21:18.403038  510395 kubeadm.go:319] 
	I1202 22:21:18.403228  510395 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 22:21:18.403293  510395 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 22:21:18.403358  510395 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 22:21:18.403371  510395 kubeadm.go:319] 
	I1202 22:21:18.408627  510395 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 22:21:18.409060  510395 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 22:21:18.409175  510395 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 22:21:18.409412  510395 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 22:21:18.409421  510395 kubeadm.go:319] 
	I1202 22:21:18.409510  510395 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1202 22:21:18.409568  510395 kubeadm.go:403] duration metric: took 8m6.498664339s to StartCluster
	I1202 22:21:18.409608  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:21:18.409703  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:21:18.433892  510395 cri.go:89] found id: ""
	I1202 22:21:18.433920  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.433929  510395 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:21:18.433935  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:21:18.433997  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:21:18.458135  510395 cri.go:89] found id: ""
	I1202 22:21:18.458168  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.458177  510395 logs.go:284] No container was found matching "etcd"
	I1202 22:21:18.458184  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:21:18.458251  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:21:18.487703  510395 cri.go:89] found id: ""
	I1202 22:21:18.487726  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.487735  510395 logs.go:284] No container was found matching "coredns"
	I1202 22:21:18.487742  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:21:18.487825  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:21:18.511734  510395 cri.go:89] found id: ""
	I1202 22:21:18.511757  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.511766  510395 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:21:18.511773  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:21:18.511833  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:21:18.535676  510395 cri.go:89] found id: ""
	I1202 22:21:18.535701  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.535710  510395 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:21:18.535717  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:21:18.535778  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:21:18.608686  510395 cri.go:89] found id: ""
	I1202 22:21:18.608714  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.608733  510395 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:21:18.608740  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:21:18.608810  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:21:18.636332  510395 cri.go:89] found id: ""
	I1202 22:21:18.636357  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.636366  510395 logs.go:284] No container was found matching "kindnet"
	I1202 22:21:18.636377  510395 logs.go:123] Gathering logs for container status ...
	I1202 22:21:18.636389  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:21:18.666396  510395 logs.go:123] Gathering logs for kubelet ...
	I1202 22:21:18.666423  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:21:18.724901  510395 logs.go:123] Gathering logs for dmesg ...
	I1202 22:21:18.724937  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:21:18.740835  510395 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:21:18.740863  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:21:18.806977  510395 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:21:18.799661    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.800223    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.801707    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.802161    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.803571    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:21:18.799661    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.800223    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.801707    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.802161    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.803571    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:21:18.806999  510395 logs.go:123] Gathering logs for containerd ...
	I1202 22:21:18.807011  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
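Every crictl listing above came back empty and describe nodes was refused on localhost:8443: the kubelet never launched the static pods, so there is no apiserver to answer. Two quick confirmations from inside the node, both matching what this log already shows:

    sudo crictl ps -a --name=kube-apiserver   # empty, as in the listings above
    curl -ks https://localhost:8443/healthz   # connection refused, as in describe nodes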
	W1202 22:21:18.849559  510395 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00114361s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 22:21:18.849624  510395 out.go:285] * 
	W1202 22:21:18.849687  510395 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00114361s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 22:21:18.849738  510395 out.go:285] * 
	W1202 22:21:18.851859  510395 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 22:21:18.857885  510395 out.go:203] 
	W1202 22:21:18.861761  510395 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00114361s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 22:21:18.861806  510395 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 22:21:18.861829  510395 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 22:21:18.865566  510395 out.go:203] 

** /stderr **
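The failure above is kubeadm's wait-control-plane phase timing out against the kubelet's local health endpoint, with the stderr warnings pointing at cgroup v1 deprecation on this cgroupfs host. A minimal triage sketch, reusing only commands named in the output above and the profile name from this run (no-preload-904303); treat it as illustrative, not part of the test harness:

	# Run the two checks the kubeadm output suggests, from inside the minikube node.
	minikube ssh -p no-preload-904303 -- sudo systemctl status kubelet
	minikube ssh -p no-preload-904303 -- sudo journalctl -xeu kubelet --no-pager

	# Poll the endpoint wait-control-plane was polling; a healthy kubelet answers "ok".
	minikube ssh -p no-preload-904303 -- curl -sSL http://127.0.0.1:10248/healthz

	# Retry with the cgroup-driver suggestion minikube prints above. The
	# 'FailCgroupV1' option from the warning is a kubelet config-file field,
	# not a command-line flag, so it is not passed here.
	minikube delete -p no-preload-904303
	minikube start -p no-preload-904303 --memory=3072 --driver=docker \
	  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
	  --preload=false --extra-config=kubelet.cgroup-driver=systemd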
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p no-preload-904303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-904303
helpers_test.go:243: (dbg) docker inspect no-preload-904303:

-- stdout --
	[
	    {
	        "Id": "419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436",
	        "Created": "2025-12-02T22:12:48.891111789Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 510696,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T22:12:48.960673074Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/hostname",
	        "HostsPath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/hosts",
	        "LogPath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436-json.log",
	        "Name": "/no-preload-904303",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-904303:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-904303",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436",
	                "LowerDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021/merged",
	                "UpperDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021/diff",
	                "WorkDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-904303",
	                "Source": "/var/lib/docker/volumes/no-preload-904303/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-904303",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-904303",
	                "name.minikube.sigs.k8s.io": "no-preload-904303",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2565d89b5b0cac53d37704e84ed068e1e8f9fea06698cfb7e3bf5fa82431969c",
	            "SandboxKey": "/var/run/docker/netns/2565d89b5b0c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33388"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33389"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33392"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33390"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33391"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-904303": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:ce:be:b1:c3:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd7fe0193300ea97495798d9ee6ddb57b917596827758698a61d4a79d61723bf",
	                    "EndpointID": "36cc446e2b4667656204614f2648dd0b57c6c026ff3e894f2ade69f763222166",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-904303",
	                        "419e3dce7c5d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
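The inspect dump above shows the container itself is fine: State.Status is "running" and all five exposed ports are bound to 127.0.0.1 on the host. Rather than reading the full JSON, individual fields can be extracted with docker's Go-template formatting; a small sketch against the same container, with field paths taken from the dump above:

	# Container state and init PID.
	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' no-preload-904303
	# Host port mapped to the node's SSH port (33388 in the dump above).
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-904303
	# Node IP on the profile network (192.168.76.2 in the dump above).
	docker inspect -f '{{(index .NetworkSettings.Networks "no-preload-904303").IPAddress}}' no-preload-904303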
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-904303 -n no-preload-904303
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-904303 -n no-preload-904303: exit status 6 (327.806206ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1202 22:21:19.281250  536188 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-904303" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig

** /stderr **
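The exit status 6 here is a kubeconfig problem rather than a host problem: because the start failed, the profile's endpoint was never written to the kubeconfig, so status cannot resolve it. The warning in the output names its own remedy; a sketch of the repair path, applicable once the cluster actually comes up:

	# Rewrite the kubeconfig entry for this profile, then verify the context.
	minikube update-context -p no-preload-904303
	kubectl config get-contexts
	kubectl --context no-preload-904303 get nodes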
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-904303 logs -n 25
E1202 22:21:19.595241  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/default-k8s-diff-port-444714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-996157 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-996157       │ jenkins │ v1.37.0 │ 02 Dec 25 22:12 UTC │ 02 Dec 25 22:12 UTC │
	│ delete  │ -p old-k8s-version-996157                                                                                                                                                                                                                                  │ old-k8s-version-996157       │ jenkins │ v1.37.0 │ 02 Dec 25 22:12 UTC │ 02 Dec 25 22:12 UTC │
	│ delete  │ -p old-k8s-version-996157                                                                                                                                                                                                                                  │ old-k8s-version-996157       │ jenkins │ v1.37.0 │ 02 Dec 25 22:12 UTC │ 02 Dec 25 22:12 UTC │
	│ start   │ -p embed-certs-716386 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:12 UTC │ 02 Dec 25 22:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-716386 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:14 UTC │
	│ stop    │ -p embed-certs-716386 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:14 UTC │
	│ addons  │ enable dashboard -p embed-certs-716386 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:14 UTC │
	│ start   │ -p embed-certs-716386 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:15 UTC │
	│ image   │ embed-certs-716386 image list --format=json                                                                                                                                                                                                                │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ pause   │ -p embed-certs-716386 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ unpause │ -p embed-certs-716386 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p embed-certs-716386                                                                                                                                                                                                                                      │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p embed-certs-716386                                                                                                                                                                                                                                      │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p disable-driver-mounts-122586                                                                                                                                                                                                                            │ disable-driver-mounts-122586 │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ start   │ -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-444714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ stop    │ -p default-k8s-diff-port-444714 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-444714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ start   │ -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:17 UTC │
	│ image   │ default-k8s-diff-port-444714 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ pause   │ -p default-k8s-diff-port-444714 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ unpause │ -p default-k8s-diff-port-444714 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ delete  │ -p default-k8s-diff-port-444714                                                                                                                                                                                                                            │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ delete  │ -p default-k8s-diff-port-444714                                                                                                                                                                                                                            │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ start   │ -p newest-cni-250247 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 22:17:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 22:17:44.404531  530747 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:17:44.404670  530747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:17:44.404684  530747 out.go:374] Setting ErrFile to fd 2...
	I1202 22:17:44.404690  530747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:17:44.405094  530747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 22:17:44.405637  530747 out.go:368] Setting JSON to false
	I1202 22:17:44.406740  530747 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14403,"bootTime":1764699462,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 22:17:44.406830  530747 start.go:143] virtualization:  
	I1202 22:17:44.410982  530747 out.go:179] * [newest-cni-250247] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:17:44.415278  530747 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:17:44.415454  530747 notify.go:221] Checking for updates...
	I1202 22:17:44.421699  530747 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:17:44.424811  530747 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:17:44.427830  530747 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 22:17:44.430886  530747 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:17:44.433744  530747 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:17:44.437092  530747 config.go:182] Loaded profile config "no-preload-904303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:17:44.437182  530747 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:17:44.470020  530747 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:17:44.470192  530747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:17:44.532667  530747 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:17:44.522777992 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:17:44.532772  530747 docker.go:319] overlay module found
	I1202 22:17:44.536064  530747 out.go:179] * Using the docker driver based on user configuration
	I1202 22:17:44.538930  530747 start.go:309] selected driver: docker
	I1202 22:17:44.538949  530747 start.go:927] validating driver "docker" against <nil>
	I1202 22:17:44.538963  530747 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:17:44.539711  530747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:17:44.592441  530747 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:17:44.583215128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:17:44.592603  530747 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1202 22:17:44.592632  530747 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1202 22:17:44.592854  530747 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 22:17:44.595765  530747 out.go:179] * Using Docker driver with root privileges
	I1202 22:17:44.598585  530747 cni.go:84] Creating CNI manager for ""
	I1202 22:17:44.598655  530747 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:17:44.598670  530747 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 22:17:44.598768  530747 start.go:353] cluster config:
	{Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:17:44.601883  530747 out.go:179] * Starting "newest-cni-250247" primary control-plane node in "newest-cni-250247" cluster
	I1202 22:17:44.604845  530747 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 22:17:44.607693  530747 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 22:17:44.610530  530747 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:17:44.610603  530747 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 22:17:44.634404  530747 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 22:17:44.634428  530747 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 22:17:44.673223  530747 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1202 22:17:44.860204  530747 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1202 22:17:44.860409  530747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json ...
	I1202 22:17:44.860446  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json: {Name:mk97b8ae8c3d085bfd853be8a3ae939898e326ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:17:44.860450  530747 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860569  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 22:17:44.860579  530747 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 143.118µs
	I1202 22:17:44.860592  530747 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 22:17:44.860605  530747 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860619  530747 cache.go:243] Successfully downloaded all kic artifacts
	I1202 22:17:44.860635  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 22:17:44.860641  530747 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 37.841µs
	I1202 22:17:44.860647  530747 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 22:17:44.860645  530747 start.go:360] acquireMachinesLock for newest-cni-250247: {Name:mk16586a4ea8dcb4ae29d3b0c6fe6a71644be6ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860657  530747 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860684  530747 start.go:364] duration metric: took 30.103µs to acquireMachinesLock for "newest-cni-250247"
	I1202 22:17:44.860691  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 22:17:44.860696  530747 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 40.401µs
	I1202 22:17:44.860705  530747 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 22:17:44.860715  530747 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860742  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 22:17:44.860702  530747 start.go:93] Provisioning new machine with config: &{Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 22:17:44.860748  530747 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 33.394µs
	I1202 22:17:44.860757  530747 start.go:125] createHost starting for "" (driver="docker")
	I1202 22:17:44.860763  530747 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 22:17:44.860772  530747 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860797  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 22:17:44.860802  530747 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 31.228µs
	I1202 22:17:44.860818  530747 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 22:17:44.860827  530747 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860853  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 22:17:44.860858  530747 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.933µs
	I1202 22:17:44.860886  530747 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860912  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 22:17:44.860917  530747 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 31.835µs
	I1202 22:17:44.860923  530747 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 22:17:44.860864  530747 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 22:17:44.860872  530747 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.861203  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 22:17:44.861213  530747 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 340.707µs
	I1202 22:17:44.861221  530747 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 22:17:44.861233  530747 cache.go:87] Successfully saved all images to host disk.
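The run of cache.go lines above is the host-side image warm-up: for each required image minikube acquires a lock, stats the tarball under .minikube/cache/images, and only re-saves it when missing, which is why every hit here completes in tens of microseconds. A minimal Go sketch of that stat-then-skip pattern (the cache root and image list are illustrative, not the real cache.go API):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
        "time"
    )

    // cachePath mirrors the layout seen in the log:
    // <root>/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
    func cachePath(root, image string) string {
        return filepath.Join(root, "cache", "images", "arm64",
            strings.ReplaceAll(image, ":", "_"))
    }

    func main() {
        root := os.Getenv("MINIKUBE_HOME") // assumption: cache root comes from env
        for _, img := range []string{
            "registry.k8s.io/kube-proxy:v1.35.0-beta.0",
            "registry.k8s.io/pause:3.10.1",
        } {
            start := time.Now()
            if _, err := os.Stat(cachePath(root, img)); err == nil {
                fmt.Printf("cache image %q exists, took %s\n", img, time.Since(start))
                continue // nothing to save
            }
            fmt.Printf("cache image %q missing, would save tar now\n", img)
        }
    }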
	I1202 22:17:44.866001  530747 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 22:17:44.866290  530747 start.go:159] libmachine.API.Create for "newest-cni-250247" (driver="docker")
	I1202 22:17:44.866351  530747 client.go:173] LocalClient.Create starting
	I1202 22:17:44.866422  530747 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem
	I1202 22:17:44.866458  530747 main.go:143] libmachine: Decoding PEM data...
	I1202 22:17:44.866484  530747 main.go:143] libmachine: Parsing certificate...
	I1202 22:17:44.866546  530747 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem
	I1202 22:17:44.866568  530747 main.go:143] libmachine: Decoding PEM data...
	I1202 22:17:44.866582  530747 main.go:143] libmachine: Parsing certificate...
	I1202 22:17:44.866956  530747 cli_runner.go:164] Run: docker network inspect newest-cni-250247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 22:17:44.882056  530747 cli_runner.go:211] docker network inspect newest-cni-250247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 22:17:44.882136  530747 network_create.go:284] running [docker network inspect newest-cni-250247] to gather additional debugging logs...
	I1202 22:17:44.882156  530747 cli_runner.go:164] Run: docker network inspect newest-cni-250247
	W1202 22:17:44.900735  530747 cli_runner.go:211] docker network inspect newest-cni-250247 returned with exit code 1
	I1202 22:17:44.900767  530747 network_create.go:287] error running [docker network inspect newest-cni-250247]: docker network inspect newest-cni-250247: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-250247 not found
	I1202 22:17:44.900798  530747 network_create.go:289] output of [docker network inspect newest-cni-250247]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-250247 not found
	
	** /stderr **
	I1202 22:17:44.900897  530747 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:17:44.919285  530747 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-37045a918311 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:0e:6a:1d:5f:aa} reservation:<nil>}
	I1202 22:17:44.919603  530747 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-11c615b6a402 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:e5:fa:65:65:bf} reservation:<nil>}
	I1202 22:17:44.919927  530747 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-efeb1d3ec8c6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:0d:78:3a:6e:22} reservation:<nil>}
	I1202 22:17:44.920175  530747 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bd7fe0193300 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:96:46:f1:c8:59:e0} reservation:<nil>}
	I1202 22:17:44.920559  530747 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ad4100}
	I1202 22:17:44.920582  530747 network_create.go:124] attempt to create docker network newest-cni-250247 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1202 22:17:44.920648  530747 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-250247 newest-cni-250247
	I1202 22:17:44.974998  530747 network_create.go:108] docker network newest-cni-250247 192.168.85.0/24 created
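The four "skipping subnet ... that is taken" lines plus the final pick show the allocator walking the 192.168.x.0/24 candidates in steps of nine (49, 58, 67, 76, 85) and taking the first third octet no existing bridge interface claims. A toy version of that walk, with the taken set hard-coded here instead of derived from docker network inspect:

    package main

    import "fmt"

    func main() {
        // Third octets already claimed by existing bridges (per the log above).
        taken := map[int]bool{49: true, 58: true, 67: true, 76: true}
        for octet := 49; octet <= 255; octet += 9 {
            if taken[octet] {
                fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
                continue
            }
            fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
            break
        }
    }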
	I1202 22:17:44.975031  530747 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-250247" container
	I1202 22:17:44.975103  530747 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 22:17:44.990787  530747 cli_runner.go:164] Run: docker volume create newest-cni-250247 --label name.minikube.sigs.k8s.io=newest-cni-250247 --label created_by.minikube.sigs.k8s.io=true
	I1202 22:17:45.013271  530747 oci.go:103] Successfully created a docker volume newest-cni-250247
	I1202 22:17:45.013406  530747 cli_runner.go:164] Run: docker run --rm --name newest-cni-250247-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-250247 --entrypoint /usr/bin/test -v newest-cni-250247:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 22:17:45.621475  530747 oci.go:107] Successfully prepared a docker volume newest-cni-250247
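That one-shot sidecar is a volume-priming trick: mounting the freshly created, empty named volume at /var lets Docker populate it from the kicbase image's /var, and running /usr/bin/test -d /var/lib as the entrypoint makes the container exit non-zero if the copy did not happen. A hedged os/exec sketch of the same invocation (volume name and image tag are placeholders, and the pinned digest is dropped for brevity):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        vol := "newest-cni-250247"
        img := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48" // placeholder tag
        // Empty named volumes are populated from the image's content at the
        // mount point; `test -d /var/lib` verifies that actually happened.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/test",
            "-v", vol+":/var", img, "-d", "/var/lib")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("volume prep failed: %v: %s", err, out)
        }
        log.Printf("successfully prepared docker volume %s", vol)
    }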
	I1202 22:17:45.621530  530747 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1202 22:17:45.621683  530747 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1202 22:17:45.621835  530747 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 22:17:45.678934  530747 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-250247 --name newest-cni-250247 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-250247 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-250247 --network newest-cni-250247 --ip 192.168.85.2 --volume newest-cni-250247:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 22:17:45.981381  530747 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Running}}
	I1202 22:17:46.003380  530747 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:17:46.032446  530747 cli_runner.go:164] Run: docker exec newest-cni-250247 stat /var/lib/dpkg/alternatives/iptables
	I1202 22:17:46.085911  530747 oci.go:144] the created container "newest-cni-250247" has a running status.
	I1202 22:17:46.085938  530747 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa...
	I1202 22:17:46.535806  530747 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 22:17:46.556856  530747 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:17:46.574862  530747 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 22:17:46.574884  530747 kic_runner.go:114] Args: [docker exec --privileged newest-cni-250247 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 22:17:46.613075  530747 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:17:46.633142  530747 machine.go:94] provisionDockerMachine start ...
	I1202 22:17:46.633247  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:46.649612  530747 main.go:143] libmachine: Using SSH client type: native
	I1202 22:17:46.649974  530747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1202 22:17:46.649985  530747 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 22:17:46.650582  530747 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52268->127.0.0.1:33413: read: connection reset by peer
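That dial error is expected rather than fatal: the container started milliseconds earlier and sshd is not yet listening, so the provisioner keeps retrying until the handshake succeeds three seconds later. A stdlib sketch of such a wait loop (33413 is the host port Docker mapped to the container's 22/tcp):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitTCP redials until the port accepts connections or the deadline
    // passes, riding out the initial "connection reset by peer".
    func waitTCP(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("dial %s: %w", addr, err)
            }
            time.Sleep(time.Second)
        }
    }

    func main() {
        if err := waitTCP("127.0.0.1:33413", 30*time.Second); err != nil {
            panic(err)
        }
        fmt.Println("ssh port is accepting connections")
    }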
	I1202 22:17:49.801182  530747 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-250247
	
	I1202 22:17:49.801207  530747 ubuntu.go:182] provisioning hostname "newest-cni-250247"
	I1202 22:17:49.801275  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:49.819629  530747 main.go:143] libmachine: Using SSH client type: native
	I1202 22:17:49.819939  530747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1202 22:17:49.819956  530747 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-250247 && echo "newest-cni-250247" | sudo tee /etc/hostname
	I1202 22:17:49.975371  530747 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-250247
	
	I1202 22:17:49.975485  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:49.993473  530747 main.go:143] libmachine: Using SSH client type: native
	I1202 22:17:49.993863  530747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1202 22:17:49.993890  530747 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-250247' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-250247/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-250247' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 22:17:50.158635  530747 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 22:17:50.158664  530747 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 22:17:50.158693  530747 ubuntu.go:190] setting up certificates
	I1202 22:17:50.158702  530747 provision.go:84] configureAuth start
	I1202 22:17:50.158761  530747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:17:50.179744  530747 provision.go:143] copyHostCerts
	I1202 22:17:50.179822  530747 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 22:17:50.179837  530747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 22:17:50.179915  530747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 22:17:50.180033  530747 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 22:17:50.180044  530747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 22:17:50.180070  530747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 22:17:50.180125  530747 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 22:17:50.180135  530747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 22:17:50.180158  530747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 22:17:50.180217  530747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.newest-cni-250247 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-250247]
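The server cert is minted with exactly the SAN set printed above (both IPs plus localhost, minikube, and the profile name) and the 26280h lifetime from CertExpiration in the config. A rough crypto/x509 sketch of assembling that template; it self-signs for brevity, whereas minikube signs with ca.pem/ca-key.pem:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-250247"}},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-250247"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Printf("server cert: %d DER bytes\n", len(der))
    }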
	I1202 22:17:50.322891  530747 provision.go:177] copyRemoteCerts
	I1202 22:17:50.322959  530747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 22:17:50.323002  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.340979  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.445218  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 22:17:50.462694  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 22:17:50.479627  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 22:17:50.496822  530747 provision.go:87] duration metric: took 338.097427ms to configureAuth
	I1202 22:17:50.496849  530747 ubuntu.go:206] setting minikube options for container-runtime
	I1202 22:17:50.497071  530747 config.go:182] Loaded profile config "newest-cni-250247": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:17:50.497084  530747 machine.go:97] duration metric: took 3.863925712s to provisionDockerMachine
	I1202 22:17:50.497092  530747 client.go:176] duration metric: took 5.630731537s to LocalClient.Create
	I1202 22:17:50.497116  530747 start.go:167] duration metric: took 5.630827551s to libmachine.API.Create "newest-cni-250247"
	I1202 22:17:50.497128  530747 start.go:293] postStartSetup for "newest-cni-250247" (driver="docker")
	I1202 22:17:50.497139  530747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 22:17:50.497194  530747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 22:17:50.497239  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.514214  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.617428  530747 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 22:17:50.620641  530747 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 22:17:50.620667  530747 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 22:17:50.620679  530747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 22:17:50.620732  530747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 22:17:50.620820  530747 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 22:17:50.620924  530747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 22:17:50.628593  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:17:50.645552  530747 start.go:296] duration metric: took 148.408487ms for postStartSetup
	I1202 22:17:50.646009  530747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:17:50.663355  530747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json ...
	I1202 22:17:50.663625  530747 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:17:50.663682  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.680676  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.782247  530747 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 22:17:50.786667  530747 start.go:128] duration metric: took 5.925896109s to createHost
	I1202 22:17:50.786693  530747 start.go:83] releasing machines lock for "newest-cni-250247", held for 5.926000968s
	I1202 22:17:50.786793  530747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:17:50.811296  530747 ssh_runner.go:195] Run: cat /version.json
	I1202 22:17:50.811346  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.811581  530747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 22:17:50.811637  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.845065  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.848995  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.954487  530747 ssh_runner.go:195] Run: systemctl --version
	I1202 22:17:51.041838  530747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 22:17:51.046274  530747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 22:17:51.046341  530747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 22:17:51.074410  530747 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1202 22:17:51.074488  530747 start.go:496] detecting cgroup driver to use...
	I1202 22:17:51.074529  530747 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 22:17:51.074589  530747 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 22:17:51.089808  530747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 22:17:51.103374  530747 docker.go:218] disabling cri-docker service (if available) ...
	I1202 22:17:51.103448  530747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 22:17:51.121828  530747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 22:17:51.141789  530747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 22:17:51.261866  530747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 22:17:51.373962  530747 docker.go:234] disabling docker service ...
	I1202 22:17:51.374058  530747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 22:17:51.395721  530747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 22:17:51.408889  530747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 22:17:51.526784  530747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 22:17:51.666659  530747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 22:17:51.680421  530747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 22:17:51.694156  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 22:17:51.703246  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 22:17:51.711965  530747 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 22:17:51.712032  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 22:17:51.720534  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:17:51.729291  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 22:17:51.737871  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:17:51.746530  530747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 22:17:51.754381  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 22:17:51.763140  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 22:17:51.771794  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1202 22:17:51.780775  530747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 22:17:51.788502  530747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 22:17:51.796157  530747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:17:51.902580  530747 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1202 22:17:51.980987  530747 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 22:17:51.981071  530747 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
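After sudo systemctl restart containerd the code does not assume the daemon came back; it polls the socket path for up to 60s before moving on to the crictl check. A small sketch of that poll:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket stats the path until it exists as a unix socket,
    // mirroring the "Will wait 60s for socket path" step above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
            panic(err)
        }
        fmt.Println("containerd socket is ready")
    }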
	I1202 22:17:51.985388  530747 start.go:564] Will wait 60s for crictl version
	I1202 22:17:51.985466  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:51.989692  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 22:17:52.016719  530747 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 22:17:52.016798  530747 ssh_runner.go:195] Run: containerd --version
	I1202 22:17:52.038219  530747 ssh_runner.go:195] Run: containerd --version
	I1202 22:17:52.064913  530747 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 22:17:52.067896  530747 cli_runner.go:164] Run: docker network inspect newest-cni-250247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:17:52.085835  530747 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 22:17:52.089730  530747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:17:52.103887  530747 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1202 22:17:52.106903  530747 kubeadm.go:884] updating cluster {Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 22:17:52.107043  530747 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:17:52.107129  530747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:17:52.134576  530747 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1202 22:17:52.134599  530747 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1202 22:17:52.134654  530747 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:52.134875  530747 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.135006  530747 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.135100  530747 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.135208  530747 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.135311  530747 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1202 22:17:52.135412  530747 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.135504  530747 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.137073  530747 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:52.137501  530747 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.138001  530747 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1202 22:17:52.138176  530747 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.138332  530747 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.138460  530747 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.138578  530747 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.138691  530747 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.486863  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1202 22:17:52.486989  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.505095  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1202 22:17:52.505220  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.506565  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1202 22:17:52.506632  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.507855  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1202 22:17:52.507959  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1202 22:17:52.511007  530747 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1202 22:17:52.511097  530747 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.511158  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.532242  530747 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1202 22:17:52.532340  530747 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.532411  530747 ssh_runner.go:195] Run: which crictl
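Each "needs transfer" verdict comes from listing the tag in containerd's k8s.io namespace and checking whether the expected digest is present; on a miss the stale tag is removed with crictl rmi so the cached tar can be imported cleanly. An approximate sketch of that decision, shelling out the same way the log does (illustrative, not the real cache_images.go logic):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // needsTransfer reports whether the runtime lacks an image with the
    // expected sha.
    func needsTransfer(image, wantSHA string) bool {
        out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "ls",
            "name=="+image).CombinedOutput()
        if err != nil {
            return true // treat a failed lookup as "not present"
        }
        return !strings.Contains(string(out), wantSHA)
    }

    func main() {
        img := "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
        sha := "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
        if needsTransfer(img, sha) {
            fmt.Printf("%q needs transfer; removing stale tag first\n", img)
            _ = exec.Command("sudo", "crictl", "rmi", img).Run() // best effort
        }
    }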
	I1202 22:17:52.548827  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1202 22:17:52.548944  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.555163  530747 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1202 22:17:52.555232  530747 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.555287  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.555371  530747 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1202 22:17:52.555414  530747 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1202 22:17:52.555456  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.555558  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.555663  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.571785  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1202 22:17:52.571882  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.572308  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1202 22:17:52.572386  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.584466  530747 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1202 22:17:52.584539  530747 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.584606  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.623084  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.623169  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.623195  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:17:52.623234  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.623260  530747 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1202 22:17:52.623496  530747 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.623551  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.626466  530747 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1202 22:17:52.626537  530747 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.626592  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.626678  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.708388  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.708492  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.708522  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.708744  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:17:52.708784  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.708846  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.708845  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.808693  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.808697  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1202 22:17:52.808758  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.808797  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:17:52.808930  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.808998  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1202 22:17:52.809059  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:17:52.809112  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.819994  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:17:52.890157  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1202 22:17:52.890191  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1202 22:17:52.890263  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1202 22:17:52.890334  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:17:52.890389  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.890452  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1202 22:17:52.890493  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:17:52.890543  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.890584  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1202 22:17:52.890594  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1202 22:17:52.890633  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1202 22:17:52.890674  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1202 22:17:52.959924  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1202 22:17:52.959968  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1202 22:17:52.960046  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1202 22:17:52.960065  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1202 22:17:52.960114  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1202 22:17:52.960195  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:17:52.960258  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1202 22:17:52.960308  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:17:52.960375  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1202 22:17:52.960392  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1202 22:17:53.029868  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1202 22:17:53.030286  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1202 22:17:53.030052  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1202 22:17:53.030412  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1202 22:17:53.145803  530747 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1202 22:17:53.145876  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	W1202 22:17:53.398719  530747 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1202 22:17:53.398959  530747 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1202 22:17:53.399035  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:53.477986  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1202 22:17:53.503645  530747 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1202 22:17:53.503712  530747 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:53.503778  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:53.528168  530747 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:17:53.528256  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:17:53.576028  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:54.770595  530747 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.242308714s)
	I1202 22:17:54.770671  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1202 22:17:54.770607  530747 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.194545785s)
	I1202 22:17:54.770798  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:54.770711  530747 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:17:54.770887  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:17:55.659136  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1202 22:17:55.659166  530747 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:17:55.659215  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:17:55.659308  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:56.632721  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1202 22:17:56.632828  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:17:56.632885  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1202 22:17:56.632905  530747 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:17:56.632929  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:17:57.649982  530747 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.017026977s)
	I1202 22:17:57.650005  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1202 22:17:57.650033  530747 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:17:57.650080  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:17:57.650148  530747 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.017305696s)
	I1202 22:17:57.650163  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1202 22:17:57.650176  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1202 22:17:58.684406  530747 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.034301571s)
	I1202 22:17:58.684477  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1202 22:17:58.684519  530747 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:17:58.684597  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:18:00.238431  530747 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.553804859s)
	I1202 22:18:00.238459  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1202 22:18:00.238485  530747 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:18:00.238539  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:18:00.740937  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1202 22:18:00.741026  530747 cache_images.go:125] Successfully loaded all cached images
	I1202 22:18:00.741044  530747 cache_images.go:94] duration metric: took 8.60643049s to LoadCachedImages
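The 8.6s load phase above is one loop per image: stat the tar under /var/lib/minikube/images (the failed "existence check" lines), transfer it over when missing, then ctr -n=k8s.io images import it. A simplified sketch of that loop, run locally for brevity where minikube drives every step through ssh_runner (the cache path here is shortened):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
    )

    // loadCached copies a cached image tar into the node's image dir if it
    // is missing, then imports it into containerd's k8s.io namespace.
    func loadCached(cacheTar, nodeDir string) error {
        dst := filepath.Join(nodeDir, filepath.Base(cacheTar))
        if _, err := os.Stat(dst); err != nil {
            // existence check failed: transfer the tar first
            if out, err := exec.Command("sudo", "cp", cacheTar, dst).CombinedOutput(); err != nil {
                return fmt.Errorf("copy: %v: %s", err, out)
            }
        }
        if out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images",
            "import", dst).CombinedOutput(); err != nil {
            return fmt.Errorf("import: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        err := loadCached("/home/jenkins/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1",
            "/var/lib/minikube/images")
        if err != nil {
            panic(err)
        }
        fmt.Println("transferred and loaded image from cache")
    }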
	I1202 22:18:00.741063  530747 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1202 22:18:00.741200  530747 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-250247 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
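The kubelet drop-in above is rendered from the node entry: the Kubernetes version selects the binary path, and hostname-override/node-ip come straight from the node config. A minimal text/template rendering of the one line that varies, under those assumptions:

    package main

    import (
        "os"
        "text/template"
    )

    // Only the ExecStart line changes per node; the rest of the unit is static.
    const execStart = "ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet " +
        "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf " +
        "--config=/var/lib/kubelet/config.yaml --hostname-override={{.Name}} " +
        "--kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}\n"

    func main() {
        t := template.Must(template.New("kubelet").Parse(execStart))
        // Values taken from the node config printed above.
        if err := t.Execute(os.Stdout, struct{ Version, Name, IP string }{
            "v1.35.0-beta.0", "newest-cni-250247", "192.168.85.2",
        }); err != nil {
            panic(err)
        }
    }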
	I1202 22:18:00.741276  530747 ssh_runner.go:195] Run: sudo crictl info
	I1202 22:18:00.765139  530747 cni.go:84] Creating CNI manager for ""
	I1202 22:18:00.765169  530747 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:18:00.765188  530747 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1202 22:18:00.765212  530747 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-250247 NodeName:newest-cni-250247 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 22:18:00.765326  530747 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-250247"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
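	The generated KubeletConfiguration above pins cgroupDriver: cgroupfs. A quick way to see which cgroup setup the host actually runs (standard commands, not taken from this log; the docker info line only applies when the docker driver is in use, as here):

	    # Hedged sketch: cgroup2fs below means the host is on cgroups v2; tmpfs
	    # usually indicates the legacy v1 hierarchy (which matches the cgroup v1
	    # deprecation warnings later in this log).
	    stat -fc %T /sys/fs/cgroup/
	    docker info --format '{{.CgroupDriver}} {{.CgroupVersion}}'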
	
	I1202 22:18:00.765433  530747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:18:00.773193  530747 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1202 22:18:00.773260  530747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:18:00.780940  530747 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1202 22:18:00.781025  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1202 22:18:00.781114  530747 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1202 22:18:00.781149  530747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:18:00.781235  530747 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1202 22:18:00.781291  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1202 22:18:00.788716  530747 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1202 22:18:00.788795  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1202 22:18:00.803910  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1202 22:18:00.804008  530747 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1202 22:18:00.804025  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1202 22:18:00.816846  530747 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1202 22:18:00.816880  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
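	The three "Not caching binary" lines above fall back to the dl.k8s.io release URLs, each paired with a checksum=file: .sha256 companion. A rough manual equivalent under the same assumptions (URLs and target directory taken from the log; the .sha256 file is assumed to hold just the hex digest, as dl.k8s.io publishes):

	    # Hedged sketch: download one binary and verify it against the published
	    # digest before installing it where minikube expects the binaries.
	    VER=v1.35.0-beta.0
	    BIN=kubelet
	    curl -fsSLO "https://dl.k8s.io/release/${VER}/bin/linux/arm64/${BIN}"
	    echo "$(curl -fsSL "https://dl.k8s.io/release/${VER}/bin/linux/arm64/${BIN}.sha256")  ${BIN}" | sha256sum -c -
	    sudo install -m 0755 "${BIN}" "/var/lib/minikube/binaries/${VER}/${BIN}"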
	I1202 22:18:01.602456  530747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 22:18:01.610653  530747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 22:18:01.624603  530747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 22:18:01.638373  530747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1202 22:18:01.652151  530747 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 22:18:01.656237  530747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:18:01.666754  530747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:18:01.784445  530747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:18:01.807464  530747 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247 for IP: 192.168.85.2
	I1202 22:18:01.807485  530747 certs.go:195] generating shared ca certs ...
	I1202 22:18:01.807504  530747 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:01.807689  530747 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 22:18:01.807752  530747 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 22:18:01.807763  530747 certs.go:257] generating profile certs ...
	I1202 22:18:01.807833  530747 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.key
	I1202 22:18:01.807852  530747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.crt with IP's: []
	I1202 22:18:01.904440  530747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.crt ...
	I1202 22:18:01.904514  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.crt: {Name:mkac1ba94fca76c17ef6889ccac434c85c3adfde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:01.904734  530747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.key ...
	I1202 22:18:01.904773  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.key: {Name:mk0c9426196191d76ac8bad3e60a1b42170fc3c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:01.904915  530747 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde
	I1202 22:18:01.904963  530747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt.dc944fde with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1202 22:18:02.273695  530747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt.dc944fde ...
	I1202 22:18:02.273733  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt.dc944fde: {Name:mk485afb3918fbbfcd9c10c46151672750ef52be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:02.273936  530747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde ...
	I1202 22:18:02.273952  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde: {Name:mk04b6e3543cdc0fbe6b60437820e2294d1297d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:02.274073  530747 certs.go:382] copying /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt.dc944fde -> /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt
	I1202 22:18:02.274162  530747 certs.go:386] copying /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde -> /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key
	I1202 22:18:02.274234  530747 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key
	I1202 22:18:02.274255  530747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt with IP's: []
	I1202 22:18:02.649970  530747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt ...
	I1202 22:18:02.650005  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt: {Name:mk19f12624bf230a68d68951d2c42662a58d37e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:02.650189  530747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key ...
	I1202 22:18:02.650212  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key: {Name:mk073d5c6ce4db6564bbfc911588b213e2c9f7d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:02.650417  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 22:18:02.650468  530747 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 22:18:02.650481  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 22:18:02.650512  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 22:18:02.650542  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 22:18:02.650565  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 22:18:02.650614  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:18:02.651161  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 22:18:02.670316  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 22:18:02.688640  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 22:18:02.706448  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 22:18:02.724635  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 22:18:02.741956  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 22:18:02.758769  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 22:18:02.776648  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 22:18:02.800104  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 22:18:02.824626  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 22:18:02.842845  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 22:18:02.859809  530747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 22:18:02.872445  530747 ssh_runner.go:195] Run: openssl version
	I1202 22:18:02.878854  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 22:18:02.887208  530747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 22:18:02.891035  530747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 22:18:02.891111  530747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 22:18:02.931750  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 22:18:02.940012  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 22:18:02.948063  530747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:18:02.951902  530747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:18:02.951976  530747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:18:02.992654  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 22:18:03.001115  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 22:18:03.011822  530747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 22:18:03.016313  530747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 22:18:03.016431  530747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 22:18:03.057849  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
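	The openssl x509 -hash runs above, each followed by a test -L || ln -fs command, are how the profile's CA certificates become discoverable by OpenSSL: every PEM gets linked under /etc/ssl/certs as <subject-hash>.0. A minimal sketch of that pairing (same files as the log; the hash values 3ec20f2e, b5213941 and 51391683 come from the runs above):

	    # Hedged sketch: compute the subject hash and create the lookup symlink,
	    # mirroring the b5213941.0 link created for minikubeCA.pem above.
	    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"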
	I1202 22:18:03.066486  530747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 22:18:03.070494  530747 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 22:18:03.070546  530747 kubeadm.go:401] StartCluster: {Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:18:03.070624  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 22:18:03.070686  530747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:18:03.097552  530747 cri.go:89] found id: ""
	I1202 22:18:03.097695  530747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 22:18:03.105804  530747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 22:18:03.114013  530747 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 22:18:03.114153  530747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 22:18:03.122166  530747 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 22:18:03.122190  530747 kubeadm.go:158] found existing configuration files:
	
	I1202 22:18:03.122266  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 22:18:03.130248  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 22:18:03.130314  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 22:18:03.137979  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 22:18:03.146142  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 22:18:03.146218  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 22:18:03.153915  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 22:18:03.162129  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 22:18:03.162264  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 22:18:03.170014  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 22:18:03.178190  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 22:18:03.178275  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 22:18:03.185714  530747 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 22:18:03.223736  530747 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 22:18:03.223941  530747 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 22:18:03.308311  530747 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 22:18:03.308389  530747 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 22:18:03.308430  530747 kubeadm.go:319] OS: Linux
	I1202 22:18:03.308479  530747 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 22:18:03.308531  530747 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 22:18:03.308591  530747 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 22:18:03.308643  530747 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 22:18:03.308698  530747 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 22:18:03.308750  530747 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 22:18:03.308799  530747 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 22:18:03.308851  530747 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 22:18:03.308901  530747 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 22:18:03.374665  530747 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 22:18:03.374792  530747 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 22:18:03.374887  530747 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 22:18:03.388049  530747 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 22:18:03.397156  530747 out.go:252]   - Generating certificates and keys ...
	I1202 22:18:03.397267  530747 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 22:18:03.397355  530747 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 22:18:03.624812  530747 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 22:18:03.988647  530747 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 22:18:04.207719  530747 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 22:18:04.369148  530747 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 22:18:04.533091  530747 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 22:18:04.533470  530747 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-250247] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1202 22:18:04.781495  530747 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 22:18:04.781896  530747 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-250247] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1202 22:18:05.055068  530747 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 22:18:05.269007  530747 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 22:18:05.339371  530747 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 22:18:05.339621  530747 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 22:18:05.517146  530747 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 22:18:05.863539  530747 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 22:18:06.326882  530747 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 22:18:06.463358  530747 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 22:18:06.983101  530747 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 22:18:06.983766  530747 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 22:18:06.989546  530747 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 22:18:06.998787  530747 out.go:252]   - Booting up control plane ...
	I1202 22:18:06.998960  530747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 22:18:06.999088  530747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 22:18:06.999437  530747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 22:18:07.023294  530747 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 22:18:07.023746  530747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 22:18:07.031501  530747 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 22:18:07.032911  530747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 22:18:07.033179  530747 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 22:18:07.172448  530747 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 22:18:07.172569  530747 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 22:21:18.402255  510395 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00114361s
	I1202 22:21:18.402290  510395 kubeadm.go:319] 
	I1202 22:21:18.402400  510395 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 22:21:18.402462  510395 kubeadm.go:319] 	- The kubelet is not running
	I1202 22:21:18.403019  510395 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 22:21:18.403038  510395 kubeadm.go:319] 
	I1202 22:21:18.403228  510395 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 22:21:18.403293  510395 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 22:21:18.403358  510395 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 22:21:18.403371  510395 kubeadm.go:319] 
	I1202 22:21:18.408627  510395 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 22:21:18.409060  510395 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 22:21:18.409175  510395 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 22:21:18.409412  510395 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 22:21:18.409421  510395 kubeadm.go:319] 
	I1202 22:21:18.409510  510395 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
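	The wait-control-plane phase that failed above is, per kubeadm's own message, an HTTP probe of the kubelet's local healthz endpoint. The same probe and the two suggested commands can be replayed on the node (via minikube ssh -p <profile> with the docker driver; commands as printed by kubeadm above, plus a tail to keep the journal readable):

	    # Hedged sketch: re-run the probe kubeadm timed out on, then collect the
	    # kubelet status and recent journal entries it recommends checking.
	    curl -sSL http://127.0.0.1:10248/healthz; echo
	    systemctl status kubelet --no-pager
	    sudo journalctl -xeu kubelet --no-pager | tail -n 50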
	I1202 22:21:18.409568  510395 kubeadm.go:403] duration metric: took 8m6.498664339s to StartCluster
	I1202 22:21:18.409608  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:21:18.409703  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:21:18.433892  510395 cri.go:89] found id: ""
	I1202 22:21:18.433920  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.433929  510395 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:21:18.433935  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:21:18.433997  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:21:18.458135  510395 cri.go:89] found id: ""
	I1202 22:21:18.458168  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.458177  510395 logs.go:284] No container was found matching "etcd"
	I1202 22:21:18.458184  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:21:18.458251  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:21:18.487703  510395 cri.go:89] found id: ""
	I1202 22:21:18.487726  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.487735  510395 logs.go:284] No container was found matching "coredns"
	I1202 22:21:18.487742  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:21:18.487825  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:21:18.511734  510395 cri.go:89] found id: ""
	I1202 22:21:18.511757  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.511766  510395 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:21:18.511773  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:21:18.511833  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:21:18.535676  510395 cri.go:89] found id: ""
	I1202 22:21:18.535701  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.535710  510395 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:21:18.535717  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:21:18.535778  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:21:18.608686  510395 cri.go:89] found id: ""
	I1202 22:21:18.608714  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.608733  510395 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:21:18.608740  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:21:18.608810  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:21:18.636332  510395 cri.go:89] found id: ""
	I1202 22:21:18.636357  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.636366  510395 logs.go:284] No container was found matching "kindnet"
	I1202 22:21:18.636377  510395 logs.go:123] Gathering logs for container status ...
	I1202 22:21:18.636389  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:21:18.666396  510395 logs.go:123] Gathering logs for kubelet ...
	I1202 22:21:18.666423  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:21:18.724901  510395 logs.go:123] Gathering logs for dmesg ...
	I1202 22:21:18.724937  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:21:18.740835  510395 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:21:18.740863  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:21:18.806977  510395 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:21:18.799661    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.800223    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.801707    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.802161    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.803571    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:21:18.799661    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.800223    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.801707    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.802161    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.803571    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:21:18.806999  510395 logs.go:123] Gathering logs for containerd ...
	I1202 22:21:18.807011  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1202 22:21:18.849559  510395 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00114361s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 22:21:18.849624  510395 out.go:285] * 
	W1202 22:21:18.849687  510395 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00114361s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 22:21:18.849738  510395 out.go:285] * 
	W1202 22:21:18.851859  510395 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 22:21:18.857885  510395 out.go:203] 
	W1202 22:21:18.861761  510395 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00114361s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 22:21:18.861806  510395 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 22:21:18.861829  510395 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
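	Spelled out as a full command line, the retry the Suggestion above points at would look like the following (profile name, driver, runtime and version taken from this run; whether systemd as the kubelet cgroup driver resolves this particular failure is untested here):

	    # Hedged sketch: capture logs for the linked issue, then retry with the
	    # suggested kubelet cgroup driver override.
	    minikube logs -p newest-cni-250247 --file=logs.txt
	    minikube start -p newest-cni-250247 --driver=docker --container-runtime=containerd \
	      --kubernetes-version=v1.35.0-beta.0 \
	      --extra-config=kubelet.cgroup-driver=systemd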
	I1202 22:21:18.865566  510395 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 22:13:00 no-preload-904303 containerd[758]: time="2025-12-02T22:13:00.056487043Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:02 no-preload-904303 containerd[758]: time="2025-12-02T22:13:02.309327150Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 02 22:13:02 no-preload-904303 containerd[758]: time="2025-12-02T22:13:02.311580305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 02 22:13:02 no-preload-904303 containerd[758]: time="2025-12-02T22:13:02.319050815Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:02 no-preload-904303 containerd[758]: time="2025-12-02T22:13:02.320996139Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:03 no-preload-904303 containerd[758]: time="2025-12-02T22:13:03.675616176Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 02 22:13:03 no-preload-904303 containerd[758]: time="2025-12-02T22:13:03.678085817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 02 22:13:03 no-preload-904303 containerd[758]: time="2025-12-02T22:13:03.698352226Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:03 no-preload-904303 containerd[758]: time="2025-12-02T22:13:03.698661001Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:05 no-preload-904303 containerd[758]: time="2025-12-02T22:13:05.162074992Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 02 22:13:05 no-preload-904303 containerd[758]: time="2025-12-02T22:13:05.178922518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 02 22:13:05 no-preload-904303 containerd[758]: time="2025-12-02T22:13:05.243438144Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:05 no-preload-904303 containerd[758]: time="2025-12-02T22:13:05.244266609Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:07 no-preload-904303 containerd[758]: time="2025-12-02T22:13:07.858893547Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 02 22:13:07 no-preload-904303 containerd[758]: time="2025-12-02T22:13:07.861151683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 02 22:13:07 no-preload-904303 containerd[758]: time="2025-12-02T22:13:07.869754107Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:07 no-preload-904303 containerd[758]: time="2025-12-02T22:13:07.870787170Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.086495518Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.094498321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.103914819Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.106075735Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.551856390Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.553967518Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.561245033Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.561946477Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:21:19.935248    5540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:19.935643    5540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:19.937269    5540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:19.937779    5540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:19.939309    5540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 22:21:19 up  4:03,  0 user,  load average: 0.18, 0.98, 1.50
	Linux no-preload-904303 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 22:21:16 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:21:17 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 02 22:21:17 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:17 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:17 no-preload-904303 kubelet[5343]: E1202 22:21:17.102658    5343 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:21:17 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:21:17 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:21:17 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 02 22:21:17 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:17 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:17 no-preload-904303 kubelet[5349]: E1202 22:21:17.843444    5349 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:21:17 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:21:17 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:21:18 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 02 22:21:18 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:18 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:18 no-preload-904303 kubelet[5402]: E1202 22:21:18.625591    5402 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:21:18 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:21:18 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:21:19 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 02 22:21:19 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:19 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:19 no-preload-904303 kubelet[5456]: E1202 22:21:19.394195    5456 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:21:19 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:21:19 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
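The kubelet journal above pins down the failure: on this cgroup v1 host, kubelet v1.35.0-beta.0 refuses to start ("kubelet is configured to not run on a host using cgroup v1"), systemd restarts it in a loop (counters 318 through 321), and kubeadm's 4m0s wait-control-plane deadline expires. The SystemVerification warning in the kubeadm stderr names the escape hatch: set the kubelet configuration option FailCgroupV1 to false. A minimal sketch of the two remedies the log itself mentions, assuming the YAML field is spelled failCgroupV1 and using an illustrative patch-file name (kubeadm applies patches against the "kubeletconfiguration" target, as the [patches] line at the top of the output shows):

	# Sketch only: a kubeadm patch re-enabling cgroup v1 for kubelet v1.35+,
	# per the SystemVerification warning; pass the directory holding it to a
	# manual 'kubeadm init --patches <dir>' run.
	mkdir -p patches
	printf '%s\n' \
	  'apiVersion: kubelet.config.k8s.io/v1beta1' \
	  'kind: KubeletConfiguration' \
	  'failCgroupV1: false' > patches/kubeletconfiguration+strategic.yaml
	# Or the suggestion minikube prints in the stderr above:
	out/minikube-linux-arm64 start -p no-preload-904303 \
	  --extra-config=kubelet.cgroup-driver=systemd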
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-904303 -n no-preload-904303
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-904303 -n no-preload-904303: exit status 6 (354.134418ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1202 22:21:20.442916  536416 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-904303" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-904303" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (512.69s)
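The status probe that follows the failed start fails for a consistent reason: the API server never came up, so the profile was never written to the kubeconfig ("no-preload-904303" does not appear in .../kubeconfig) and kubectl is left pointing at a stale context. The status output above suggests the usual repair; an illustrative form with this run's profile name (it can only take effect once the cluster itself starts):

	# Illustrative: rewrite the kubeconfig entry for this profile, then
	# check which context kubectl is pointing at.
	out/minikube-linux-arm64 update-context -p no-preload-904303
	kubectl config current-context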

TestStartStop/group/newest-cni/serial/FirstStart (507.06s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-250247 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1202 22:17:50.414371  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:17:50.594564  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:19:12.516048  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:19:44.123048  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:20:39.589195  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:21:17.026323  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/default-k8s-diff-port-444714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:21:17.032844  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/default-k8s-diff-port-444714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:21:17.044224  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/default-k8s-diff-port-444714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:21:17.065687  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/default-k8s-diff-port-444714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:21:17.107133  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/default-k8s-diff-port-444714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:21:17.188521  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/default-k8s-diff-port-444714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:21:17.350113  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/default-k8s-diff-port-444714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:21:17.671792  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/default-k8s-diff-port-444714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:21:18.313842  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/default-k8s-diff-port-444714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-250247 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m25.399025808s)

-- stdout --
	* [newest-cni-250247] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "newest-cni-250247" primary control-plane node in "newest-cni-250247" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

-- /stdout --
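This start passes --network-plugin=cni without an explicit --cni; the stderr log below warns about exactly that and notes that minikube falls back to recommending kindnet for the docker driver with containerd. A hedged variant of the same invocation that names the CNI up front (flags copied from the test command, --cni=kindnet added for illustration; this addresses the warning only, not necessarily the cause of the exit status 109 above):

	# Sketch: the same start with a concrete CNI selected explicitly.
	out/minikube-linux-arm64 start -p newest-cni-250247 --memory=3072 \
	  --network-plugin=cni --cni=kindnet \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.35.0-beta.0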
** stderr ** 
	I1202 22:17:44.404531  530747 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:17:44.404670  530747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:17:44.404684  530747 out.go:374] Setting ErrFile to fd 2...
	I1202 22:17:44.404690  530747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:17:44.405094  530747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 22:17:44.405637  530747 out.go:368] Setting JSON to false
	I1202 22:17:44.406740  530747 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14403,"bootTime":1764699462,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 22:17:44.406830  530747 start.go:143] virtualization:  
	I1202 22:17:44.410982  530747 out.go:179] * [newest-cni-250247] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:17:44.415278  530747 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:17:44.415454  530747 notify.go:221] Checking for updates...
	I1202 22:17:44.421699  530747 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:17:44.424811  530747 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:17:44.427830  530747 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 22:17:44.430886  530747 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:17:44.433744  530747 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:17:44.437092  530747 config.go:182] Loaded profile config "no-preload-904303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:17:44.437182  530747 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:17:44.470020  530747 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:17:44.470192  530747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:17:44.532667  530747 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:17:44.522777992 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:17:44.532772  530747 docker.go:319] overlay module found
	I1202 22:17:44.536064  530747 out.go:179] * Using the docker driver based on user configuration
	I1202 22:17:44.538930  530747 start.go:309] selected driver: docker
	I1202 22:17:44.538949  530747 start.go:927] validating driver "docker" against <nil>
	I1202 22:17:44.538963  530747 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:17:44.539711  530747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:17:44.592441  530747 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:17:44.583215128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:17:44.592603  530747 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1202 22:17:44.592632  530747 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1202 22:17:44.592854  530747 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 22:17:44.595765  530747 out.go:179] * Using Docker driver with root privileges
	I1202 22:17:44.598585  530747 cni.go:84] Creating CNI manager for ""
	I1202 22:17:44.598655  530747 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:17:44.598670  530747 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 22:17:44.598768  530747 start.go:353] cluster config:
	{Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:17:44.601883  530747 out.go:179] * Starting "newest-cni-250247" primary control-plane node in "newest-cni-250247" cluster
	I1202 22:17:44.604845  530747 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 22:17:44.607693  530747 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 22:17:44.610530  530747 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:17:44.610603  530747 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 22:17:44.634404  530747 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 22:17:44.634428  530747 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 22:17:44.673223  530747 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1202 22:17:44.860204  530747 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1202 22:17:44.860409  530747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json ...
	I1202 22:17:44.860446  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json: {Name:mk97b8ae8c3d085bfd853be8a3ae939898e326ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:17:44.860450  530747 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860569  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 22:17:44.860579  530747 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 143.118µs
	I1202 22:17:44.860592  530747 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 22:17:44.860605  530747 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860619  530747 cache.go:243] Successfully downloaded all kic artifacts
	I1202 22:17:44.860635  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 22:17:44.860641  530747 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 37.841µs
	I1202 22:17:44.860647  530747 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 22:17:44.860645  530747 start.go:360] acquireMachinesLock for newest-cni-250247: {Name:mk16586a4ea8dcb4ae29d3b0c6fe6a71644be6ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860657  530747 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860684  530747 start.go:364] duration metric: took 30.103µs to acquireMachinesLock for "newest-cni-250247"
	I1202 22:17:44.860691  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 22:17:44.860696  530747 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 40.401µs
	I1202 22:17:44.860705  530747 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 22:17:44.860715  530747 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860742  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 22:17:44.860702  530747 start.go:93] Provisioning new machine with config: &{Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 22:17:44.860748  530747 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 33.394µs
	I1202 22:17:44.860757  530747 start.go:125] createHost starting for "" (driver="docker")
	I1202 22:17:44.860763  530747 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 22:17:44.860772  530747 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860797  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 22:17:44.860802  530747 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 31.228µs
	I1202 22:17:44.860818  530747 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 22:17:44.860827  530747 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860853  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 22:17:44.860858  530747 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.933µs
	I1202 22:17:44.860886  530747 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860912  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 22:17:44.860917  530747 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 31.835µs
	I1202 22:17:44.860923  530747 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 22:17:44.860864  530747 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 22:17:44.860872  530747 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.861203  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 22:17:44.861213  530747 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 340.707µs
	I1202 22:17:44.861221  530747 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 22:17:44.861233  530747 cache.go:87] Successfully saved all images to host disk.
	I1202 22:17:44.866001  530747 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 22:17:44.866290  530747 start.go:159] libmachine.API.Create for "newest-cni-250247" (driver="docker")
	I1202 22:17:44.866351  530747 client.go:173] LocalClient.Create starting
	I1202 22:17:44.866422  530747 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem
	I1202 22:17:44.866458  530747 main.go:143] libmachine: Decoding PEM data...
	I1202 22:17:44.866484  530747 main.go:143] libmachine: Parsing certificate...
	I1202 22:17:44.866546  530747 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem
	I1202 22:17:44.866568  530747 main.go:143] libmachine: Decoding PEM data...
	I1202 22:17:44.866582  530747 main.go:143] libmachine: Parsing certificate...
	I1202 22:17:44.866956  530747 cli_runner.go:164] Run: docker network inspect newest-cni-250247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 22:17:44.882056  530747 cli_runner.go:211] docker network inspect newest-cni-250247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 22:17:44.882136  530747 network_create.go:284] running [docker network inspect newest-cni-250247] to gather additional debugging logs...
	I1202 22:17:44.882156  530747 cli_runner.go:164] Run: docker network inspect newest-cni-250247
	W1202 22:17:44.900735  530747 cli_runner.go:211] docker network inspect newest-cni-250247 returned with exit code 1
	I1202 22:17:44.900767  530747 network_create.go:287] error running [docker network inspect newest-cni-250247]: docker network inspect newest-cni-250247: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-250247 not found
	I1202 22:17:44.900798  530747 network_create.go:289] output of [docker network inspect newest-cni-250247]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-250247 not found
	
	** /stderr **
	I1202 22:17:44.900897  530747 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:17:44.919285  530747 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-37045a918311 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:0e:6a:1d:5f:aa} reservation:<nil>}
	I1202 22:17:44.919603  530747 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-11c615b6a402 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:e5:fa:65:65:bf} reservation:<nil>}
	I1202 22:17:44.919927  530747 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-efeb1d3ec8c6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:0d:78:3a:6e:22} reservation:<nil>}
	I1202 22:17:44.920175  530747 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bd7fe0193300 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:96:46:f1:c8:59:e0} reservation:<nil>}
	I1202 22:17:44.920559  530747 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ad4100}
	I1202 22:17:44.920582  530747 network_create.go:124] attempt to create docker network newest-cni-250247 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1202 22:17:44.920648  530747 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-250247 newest-cni-250247
	I1202 22:17:44.974998  530747 network_create.go:108] docker network newest-cni-250247 192.168.85.0/24 created
	I1202 22:17:44.975031  530747 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-250247" container
	I1202 22:17:44.975103  530747 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 22:17:44.990787  530747 cli_runner.go:164] Run: docker volume create newest-cni-250247 --label name.minikube.sigs.k8s.io=newest-cni-250247 --label created_by.minikube.sigs.k8s.io=true
	I1202 22:17:45.013271  530747 oci.go:103] Successfully created a docker volume newest-cni-250247
	I1202 22:17:45.013406  530747 cli_runner.go:164] Run: docker run --rm --name newest-cni-250247-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-250247 --entrypoint /usr/bin/test -v newest-cni-250247:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 22:17:45.621475  530747 oci.go:107] Successfully prepared a docker volume newest-cni-250247
	I1202 22:17:45.621530  530747 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1202 22:17:45.621683  530747 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1202 22:17:45.621835  530747 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 22:17:45.678934  530747 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-250247 --name newest-cni-250247 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-250247 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-250247 --network newest-cni-250247 --ip 192.168.85.2 --volume newest-cni-250247:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 22:17:45.981381  530747 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Running}}
	I1202 22:17:46.003380  530747 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:17:46.032446  530747 cli_runner.go:164] Run: docker exec newest-cni-250247 stat /var/lib/dpkg/alternatives/iptables
	I1202 22:17:46.085911  530747 oci.go:144] the created container "newest-cni-250247" has a running status.
	I1202 22:17:46.085938  530747 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa...
	I1202 22:17:46.535806  530747 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 22:17:46.556856  530747 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:17:46.574862  530747 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 22:17:46.574884  530747 kic_runner.go:114] Args: [docker exec --privileged newest-cni-250247 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 22:17:46.613075  530747 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:17:46.633142  530747 machine.go:94] provisionDockerMachine start ...
	I1202 22:17:46.633247  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:46.649612  530747 main.go:143] libmachine: Using SSH client type: native
	I1202 22:17:46.649974  530747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1202 22:17:46.649985  530747 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 22:17:46.650582  530747 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52268->127.0.0.1:33413: read: connection reset by peer
	I1202 22:17:49.801182  530747 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-250247
	
	I1202 22:17:49.801207  530747 ubuntu.go:182] provisioning hostname "newest-cni-250247"
	I1202 22:17:49.801275  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:49.819629  530747 main.go:143] libmachine: Using SSH client type: native
	I1202 22:17:49.819939  530747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1202 22:17:49.819956  530747 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-250247 && echo "newest-cni-250247" | sudo tee /etc/hostname
	I1202 22:17:49.975371  530747 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-250247
	
	I1202 22:17:49.975485  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:49.993473  530747 main.go:143] libmachine: Using SSH client type: native
	I1202 22:17:49.993863  530747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1202 22:17:49.993890  530747 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-250247' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-250247/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-250247' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 22:17:50.158635  530747 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 22:17:50.158664  530747 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 22:17:50.158693  530747 ubuntu.go:190] setting up certificates
	I1202 22:17:50.158702  530747 provision.go:84] configureAuth start
	I1202 22:17:50.158761  530747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:17:50.179744  530747 provision.go:143] copyHostCerts
	I1202 22:17:50.179822  530747 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 22:17:50.179837  530747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 22:17:50.179915  530747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 22:17:50.180033  530747 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 22:17:50.180044  530747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 22:17:50.180070  530747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 22:17:50.180125  530747 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 22:17:50.180135  530747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 22:17:50.180158  530747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 22:17:50.180217  530747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.newest-cni-250247 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-250247]
	I1202 22:17:50.322891  530747 provision.go:177] copyRemoteCerts
	I1202 22:17:50.322959  530747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 22:17:50.323002  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.340979  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.445218  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 22:17:50.462694  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 22:17:50.479627  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 22:17:50.496822  530747 provision.go:87] duration metric: took 338.097427ms to configureAuth
	I1202 22:17:50.496849  530747 ubuntu.go:206] setting minikube options for container-runtime
	I1202 22:17:50.497071  530747 config.go:182] Loaded profile config "newest-cni-250247": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:17:50.497084  530747 machine.go:97] duration metric: took 3.863925712s to provisionDockerMachine
	I1202 22:17:50.497092  530747 client.go:176] duration metric: took 5.630731537s to LocalClient.Create
	I1202 22:17:50.497116  530747 start.go:167] duration metric: took 5.630827551s to libmachine.API.Create "newest-cni-250247"
	I1202 22:17:50.497128  530747 start.go:293] postStartSetup for "newest-cni-250247" (driver="docker")
	I1202 22:17:50.497139  530747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 22:17:50.497194  530747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 22:17:50.497239  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.514214  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.617428  530747 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 22:17:50.620641  530747 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 22:17:50.620667  530747 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 22:17:50.620679  530747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 22:17:50.620732  530747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 22:17:50.620820  530747 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 22:17:50.620924  530747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 22:17:50.628593  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:17:50.645552  530747 start.go:296] duration metric: took 148.408487ms for postStartSetup
	I1202 22:17:50.646009  530747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:17:50.663355  530747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json ...
	I1202 22:17:50.663625  530747 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:17:50.663682  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.680676  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.782247  530747 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 22:17:50.786667  530747 start.go:128] duration metric: took 5.925896109s to createHost
	I1202 22:17:50.786693  530747 start.go:83] releasing machines lock for "newest-cni-250247", held for 5.926000968s
	I1202 22:17:50.786793  530747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:17:50.811296  530747 ssh_runner.go:195] Run: cat /version.json
	I1202 22:17:50.811346  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.811581  530747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 22:17:50.811637  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.845065  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.848995  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.954487  530747 ssh_runner.go:195] Run: systemctl --version
	I1202 22:17:51.041838  530747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 22:17:51.046274  530747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 22:17:51.046341  530747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 22:17:51.074410  530747 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
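
The find/-exec step above sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so they stop matching the runtime's config loaders before kindnet takes over pod networking. A rough Go equivalent of that rename, assuming the same /etc/cni/net.d layout (needs root):

// Sketch of the rename above: disable bridge/podman CNI configs by
// appending ".mk_disabled".
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pat)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			fmt.Println("disabled", m)
		}
	}
}
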
	I1202 22:17:51.074488  530747 start.go:496] detecting cgroup driver to use...
	I1202 22:17:51.074529  530747 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 22:17:51.074589  530747 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 22:17:51.089808  530747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 22:17:51.103374  530747 docker.go:218] disabling cri-docker service (if available) ...
	I1202 22:17:51.103448  530747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 22:17:51.121828  530747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 22:17:51.141789  530747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 22:17:51.261866  530747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 22:17:51.373962  530747 docker.go:234] disabling docker service ...
	I1202 22:17:51.374058  530747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 22:17:51.395721  530747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 22:17:51.408889  530747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 22:17:51.526784  530747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 22:17:51.666659  530747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 22:17:51.680421  530747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 22:17:51.694156  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 22:17:51.703246  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 22:17:51.711965  530747 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 22:17:51.712032  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 22:17:51.720534  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:17:51.729291  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 22:17:51.737871  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:17:51.746530  530747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 22:17:51.754381  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 22:17:51.763140  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 22:17:51.771794  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1202 22:17:51.780775  530747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 22:17:51.788502  530747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 22:17:51.796157  530747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:17:51.902580  530747 ssh_runner.go:195] Run: sudo systemctl restart containerd
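
The sed pipeline above pins sandbox_image to pause:3.10.1, disables restrict_oom_score_adj, and forces SystemdCgroup = false to match the "cgroupfs" driver detected on the host. A sketch of the same in-place rewrites with Go regexps, assuming the stock /etc/containerd/config.toml path; restart containerd afterwards, as the log does:

// Sketch of the sed edits above, applied as Go regexp rewrites.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// The same three rewrites the sed commands above perform.
	rules := []struct{ re, repl string }{
		{`(?m)^( *)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`},
		{`(?m)^( *)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
		{`(?m)^( *)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
	}
	for _, r := range rules {
		data = regexp.MustCompile(r.re).ReplaceAll(data, []byte(r.repl))
	}
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}
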
	I1202 22:17:51.980987  530747 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 22:17:51.981071  530747 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 22:17:51.985388  530747 start.go:564] Will wait 60s for crictl version
	I1202 22:17:51.985466  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:51.989692  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 22:17:52.016719  530747 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
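
start.go waits up to 60s for /run/containerd/containerd.sock before probing crictl. minikube gates this on a stat of the socket path over SSH; the sketch below instead dials the socket locally until it accepts, a reasonable stand-in for the same readiness check:

// Sketch of the 60s socket wait above: poll the containerd unix socket
// until it accepts a connection or the deadline passes.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		c, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			c.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("containerd socket is up")
}
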
	I1202 22:17:52.016798  530747 ssh_runner.go:195] Run: containerd --version
	I1202 22:17:52.038219  530747 ssh_runner.go:195] Run: containerd --version
	I1202 22:17:52.064913  530747 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 22:17:52.067896  530747 cli_runner.go:164] Run: docker network inspect newest-cni-250247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:17:52.085835  530747 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 22:17:52.089730  530747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:17:52.103887  530747 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1202 22:17:52.106903  530747 kubeadm.go:884] updating cluster {Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 22:17:52.107043  530747 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:17:52.107129  530747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:17:52.134576  530747 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1202 22:17:52.134599  530747 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1202 22:17:52.134654  530747 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:52.134875  530747 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.135006  530747 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.135100  530747 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.135208  530747 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.135311  530747 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1202 22:17:52.135412  530747 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.135504  530747 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.137073  530747 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:52.137501  530747 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.138001  530747 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1202 22:17:52.138176  530747 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.138332  530747 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.138460  530747 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.138578  530747 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.138691  530747 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
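
Because none of the images resolve in the host Docker daemon (the lookup failures above), minikube falls back to asking the node's containerd directly: each "Checking existence of image" line below shells out to ctr with a name== filter and compares the pinned sha. A hedged sketch of that presence probe (the sha comparison is omitted here):

// Sketch of the existence checks below: ask ctr for the image by name
// and report whether anything matched. minikube additionally compares
// the image's config sha, which this sketch skips.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func imagePresent(ref string) (bool, error) {
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "ls", "name=="+ref).Output()
	if err != nil {
		return false, err
	}
	// ctr prints a header row even when nothing matches.
	return len(strings.Split(strings.TrimSpace(string(out)), "\n")) > 1, nil
}

func main() {
	ok, err := imagePresent("registry.k8s.io/pause:3.10.1")
	if err != nil {
		panic(err)
	}
	fmt.Println("present:", ok)
}
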
	I1202 22:17:52.486863  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1202 22:17:52.486989  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.505095  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1202 22:17:52.505220  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.506565  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1202 22:17:52.506632  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.507855  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1202 22:17:52.507959  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1202 22:17:52.511007  530747 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1202 22:17:52.511097  530747 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.511158  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.532242  530747 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1202 22:17:52.532340  530747 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.532411  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.548827  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1202 22:17:52.548944  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.555163  530747 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1202 22:17:52.555232  530747 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.555287  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.555371  530747 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1202 22:17:52.555414  530747 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1202 22:17:52.555456  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.555558  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.555663  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.571785  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1202 22:17:52.571882  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.572308  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1202 22:17:52.572386  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.584466  530747 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1202 22:17:52.584539  530747 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.584606  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.623084  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.623169  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.623195  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:17:52.623234  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.623260  530747 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1202 22:17:52.623496  530747 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.623551  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.626466  530747 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1202 22:17:52.626537  530747 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.626592  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.626678  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.708388  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.708492  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.708522  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.708744  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:17:52.708784  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.708846  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.708845  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.808693  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.808697  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1202 22:17:52.808758  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.808797  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:17:52.808930  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.808998  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1202 22:17:52.809059  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:17:52.809112  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.819994  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:17:52.890157  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1202 22:17:52.890191  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1202 22:17:52.890263  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1202 22:17:52.890334  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:17:52.890389  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.890452  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1202 22:17:52.890493  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:17:52.890543  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.890584  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1202 22:17:52.890594  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1202 22:17:52.890633  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1202 22:17:52.890674  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1202 22:17:52.959924  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1202 22:17:52.959968  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1202 22:17:52.960046  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1202 22:17:52.960065  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1202 22:17:52.960114  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1202 22:17:52.960195  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:17:52.960258  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1202 22:17:52.960308  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:17:52.960375  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1202 22:17:52.960392  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1202 22:17:53.029868  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1202 22:17:53.030286  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1202 22:17:53.030052  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1202 22:17:53.030412  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
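
Each cached image follows the same pattern: stat the target under /var/lib/minikube/images (the status-1 "No such file" results above are the expected miss on a fresh node), then scp the tarball across. A local sketch of that check-then-copy logic with illustrative paths; the real transfer runs over SSH:

// Sketch of the check-then-transfer pattern above, done with local
// files for brevity: copy the cached tarball only when missing.
package main

import (
	"fmt"
	"io"
	"os"
)

func ensureCopied(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already transferred
	} else if !os.IsNotExist(err) {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Illustrative paths, mirroring the cache layout in the log.
	if err := ensureCopied("cache/pause_3.10.1", "/var/lib/minikube/images/pause_3.10.1"); err != nil {
		panic(err)
	}
	fmt.Println("transferred")
}
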
	I1202 22:17:53.145803  530747 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1202 22:17:53.145876  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	W1202 22:17:53.398719  530747 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1202 22:17:53.398959  530747 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1202 22:17:53.399035  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:53.477986  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1202 22:17:53.503645  530747 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1202 22:17:53.503712  530747 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:53.503778  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:53.528168  530747 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:17:53.528256  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:17:53.576028  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:54.770595  530747 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.242308714s)
	I1202 22:17:54.770671  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1202 22:17:54.770607  530747 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.194545785s)
	I1202 22:17:54.770798  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:54.770711  530747 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:17:54.770887  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:17:55.659136  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1202 22:17:55.659166  530747 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:17:55.659215  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:17:55.659308  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:56.632721  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1202 22:17:56.632828  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:17:56.632885  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1202 22:17:56.632905  530747 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:17:56.632929  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:17:57.649982  530747 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.017026977s)
	I1202 22:17:57.650005  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1202 22:17:57.650033  530747 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:17:57.650080  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:17:57.650148  530747 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.017305696s)
	I1202 22:17:57.650163  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1202 22:17:57.650176  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1202 22:17:58.684406  530747 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.034301571s)
	I1202 22:17:58.684477  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1202 22:17:58.684519  530747 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:17:58.684597  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:18:00.238431  530747 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.553804859s)
	I1202 22:18:00.238459  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1202 22:18:00.238485  530747 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:18:00.238539  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:18:00.740937  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1202 22:18:00.741026  530747 cache_images.go:125] Successfully loaded all cached images
	I1202 22:18:00.741044  530747 cache_images.go:94] duration metric: took 8.60643049s to LoadCachedImages
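
Once transferred, every tarball is loaded with ctr -n=k8s.io images import, one at a time, which is where the per-image durations above come from. A sketch of that import loop via os/exec, with an abbreviated, illustrative tarball list:

// Sketch of the import loop above: load each transferred tarball into
// the k8s.io containerd namespace via ctr.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tarballs := []string{
		"/var/lib/minikube/images/pause_3.10.1",
		"/var/lib/minikube/images/etcd_3.6.5-0",
		"/var/lib/minikube/images/storage-provisioner_v5",
	}
	for _, t := range tarballs {
		out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", t).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("import %s: %v\n%s", t, err, out))
		}
		fmt.Println("loaded", t)
	}
}
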
	I1202 22:18:00.741063  530747 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1202 22:18:00.741200  530747 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-250247 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 22:18:00.741276  530747 ssh_runner.go:195] Run: sudo crictl info
	I1202 22:18:00.765139  530747 cni.go:84] Creating CNI manager for ""
	I1202 22:18:00.765169  530747 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:18:00.765188  530747 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1202 22:18:00.765212  530747 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-250247 NodeName:newest-cni-250247 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 22:18:00.765326  530747 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-250247"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 22:18:00.765433  530747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:18:00.773193  530747 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1202 22:18:00.773260  530747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:18:00.780940  530747 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1202 22:18:00.781025  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1202 22:18:00.781114  530747 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1202 22:18:00.781149  530747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:18:00.781235  530747 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1202 22:18:00.781291  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1202 22:18:00.788716  530747 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1202 22:18:00.788795  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1202 22:18:00.803910  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1202 22:18:00.804008  530747 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1202 22:18:00.804025  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1202 22:18:00.816846  530747 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1202 22:18:00.816880  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
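
binary.go:80 above skips the local binary cache and fetches kubectl/kubeadm/kubelet straight from dl.k8s.io, verifying each download against its .sha256 sidecar (the checksum=file:... suffix). A sketch of that download-and-verify step for the kubectl URL shown above:

// Sketch of the checksum'd download above: fetch a release binary and
// its ".sha256" sidecar, compare digests, then install the file.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0] // sidecar holds the bare hex digest
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("verified kubectl,", len(bin), "bytes")
}
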
	I1202 22:18:01.602456  530747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 22:18:01.610653  530747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 22:18:01.624603  530747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 22:18:01.638373  530747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1202 22:18:01.652151  530747 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 22:18:01.656237  530747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:18:01.666754  530747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:18:01.784445  530747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:18:01.807464  530747 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247 for IP: 192.168.85.2
	I1202 22:18:01.807485  530747 certs.go:195] generating shared ca certs ...
	I1202 22:18:01.807504  530747 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:01.807689  530747 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 22:18:01.807752  530747 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 22:18:01.807763  530747 certs.go:257] generating profile certs ...
	I1202 22:18:01.807833  530747 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.key
	I1202 22:18:01.807852  530747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.crt with IP's: []
	I1202 22:18:01.904440  530747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.crt ...
	I1202 22:18:01.904514  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.crt: {Name:mkac1ba94fca76c17ef6889ccac434c85c3adfde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:01.904734  530747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.key ...
	I1202 22:18:01.904773  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.key: {Name:mk0c9426196191d76ac8bad3e60a1b42170fc3c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:01.904915  530747 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde
	I1202 22:18:01.904963  530747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt.dc944fde with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1202 22:18:02.273695  530747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt.dc944fde ...
	I1202 22:18:02.273733  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt.dc944fde: {Name:mk485afb3918fbbfcd9c10c46151672750ef52be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:02.273936  530747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde ...
	I1202 22:18:02.273952  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde: {Name:mk04b6e3543cdc0fbe6b60437820e2294d1297d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:02.274073  530747 certs.go:382] copying /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt.dc944fde -> /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt
	I1202 22:18:02.274162  530747 certs.go:386] copying /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde -> /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key
	I1202 22:18:02.274234  530747 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key
	I1202 22:18:02.274255  530747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt with IP's: []
	I1202 22:18:02.649970  530747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt ...
	I1202 22:18:02.650005  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt: {Name:mk19f12624bf230a68d68951d2c42662a58d37e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:02.650189  530747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key ...
	I1202 22:18:02.650212  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key: {Name:mk073d5c6ce4db6564bbfc911588b213e2c9f7d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
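
Note the apiserver cert above is issued for [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]: 10.96.0.1 is the first host address of ServiceCIDR 10.96.0.0/12, i.e. the in-cluster "kubernetes" service IP through which pods reach the apiserver. A short sketch of that derivation:

// Sketch: derive the in-cluster apiserver service IP (first host
// address of the service CIDR) that appears in the SAN list above.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	prefix := netip.MustParsePrefix("10.96.0.0/12")
	first := prefix.Addr().Next() // network address + 1
	fmt.Println(first)            // 10.96.0.1
}
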
	I1202 22:18:02.650417  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 22:18:02.650468  530747 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 22:18:02.650481  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 22:18:02.650512  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 22:18:02.650542  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 22:18:02.650565  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 22:18:02.650614  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:18:02.651161  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 22:18:02.670316  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 22:18:02.688640  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 22:18:02.706448  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 22:18:02.724635  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 22:18:02.741956  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 22:18:02.758769  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 22:18:02.776648  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 22:18:02.800104  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 22:18:02.824626  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 22:18:02.842845  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 22:18:02.859809  530747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 22:18:02.872445  530747 ssh_runner.go:195] Run: openssl version
	I1202 22:18:02.878854  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 22:18:02.887208  530747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 22:18:02.891035  530747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 22:18:02.891111  530747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 22:18:02.931750  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 22:18:02.940012  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 22:18:02.948063  530747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:18:02.951902  530747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:18:02.951976  530747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:18:02.992654  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 22:18:03.001115  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 22:18:03.011822  530747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 22:18:03.016313  530747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 22:18:03.016431  530747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 22:18:03.057849  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
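The three `openssl x509 -hash` / `ln -fs` pairs above follow OpenSSL's subject-hash lookup convention: the hash of the certificate's subject name becomes a symlink `<hash>.0` under /etc/ssl/certs, which is how OpenSSL locates a CA during verification. A minimal sketch of the same step, assuming an illustrative PEM path:

    # Install a CA certificate so OpenSSL can find it by subject hash.
    # ca.pem is a placeholder; ".0" is the first slot for this hash
    # (colliding subject hashes would use .1, .2, ...).
    pem=/usr/share/ca-certificates/ca.pem
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"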
	I1202 22:18:03.066486  530747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 22:18:03.070494  530747 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 22:18:03.070546  530747 kubeadm.go:401] StartCluster: {Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:18:03.070624  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 22:18:03.070686  530747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:18:03.097552  530747 cri.go:89] found id: ""
	I1202 22:18:03.097695  530747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 22:18:03.105804  530747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 22:18:03.114013  530747 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 22:18:03.114153  530747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 22:18:03.122166  530747 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 22:18:03.122190  530747 kubeadm.go:158] found existing configuration files:
	
	I1202 22:18:03.122266  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 22:18:03.130248  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 22:18:03.130314  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 22:18:03.137979  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 22:18:03.146142  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 22:18:03.146218  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 22:18:03.153915  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 22:18:03.162129  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 22:18:03.162264  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 22:18:03.170014  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 22:18:03.178190  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 22:18:03.178275  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
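The four grep/rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so kubeadm can regenerate it (here the files simply do not exist yet, so grep exits non-zero and the rm is a no-op). Condensed as a sketch, with the endpoint taken from the log:

    # Remove kubeconfigs that do not point at the expected endpoint;
    # a missing file also makes grep fail, which likewise triggers rm.
    endpoint='https://control-plane.minikube.internal:8443'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done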
	I1202 22:18:03.185714  530747 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 22:18:03.223736  530747 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 22:18:03.223941  530747 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 22:18:03.308311  530747 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 22:18:03.308389  530747 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 22:18:03.308430  530747 kubeadm.go:319] OS: Linux
	I1202 22:18:03.308479  530747 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 22:18:03.308531  530747 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 22:18:03.308591  530747 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 22:18:03.308643  530747 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 22:18:03.308698  530747 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 22:18:03.308750  530747 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 22:18:03.308799  530747 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 22:18:03.308851  530747 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 22:18:03.308901  530747 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 22:18:03.374665  530747 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 22:18:03.374792  530747 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 22:18:03.374887  530747 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 22:18:03.388049  530747 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 22:18:03.397156  530747 out.go:252]   - Generating certificates and keys ...
	I1202 22:18:03.397267  530747 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 22:18:03.397355  530747 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 22:18:03.624812  530747 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 22:18:03.988647  530747 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 22:18:04.207719  530747 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 22:18:04.369148  530747 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 22:18:04.533091  530747 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 22:18:04.533470  530747 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-250247] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1202 22:18:04.781495  530747 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 22:18:04.781896  530747 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-250247] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1202 22:18:05.055068  530747 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 22:18:05.269007  530747 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 22:18:05.339371  530747 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 22:18:05.339621  530747 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 22:18:05.517146  530747 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 22:18:05.863539  530747 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 22:18:06.326882  530747 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 22:18:06.463358  530747 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 22:18:06.983101  530747 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 22:18:06.983766  530747 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 22:18:06.989546  530747 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 22:18:06.998787  530747 out.go:252]   - Booting up control plane ...
	I1202 22:18:06.998960  530747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 22:18:06.999088  530747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 22:18:06.999437  530747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 22:18:07.023294  530747 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 22:18:07.023746  530747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 22:18:07.031501  530747 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 22:18:07.032911  530747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 22:18:07.033179  530747 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 22:18:07.172448  530747 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 22:18:07.172569  530747 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 22:22:07.172189  530747 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000061877s
	I1202 22:22:07.172421  530747 kubeadm.go:319] 
	I1202 22:22:07.172498  530747 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 22:22:07.172536  530747 kubeadm.go:319] 	- The kubelet is not running
	I1202 22:22:07.172651  530747 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 22:22:07.172657  530747 kubeadm.go:319] 
	I1202 22:22:07.172769  530747 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 22:22:07.172808  530747 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 22:22:07.172839  530747 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 22:22:07.172843  530747 kubeadm.go:319] 
	I1202 22:22:07.176868  530747 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 22:22:07.177410  530747 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 22:22:07.177535  530747 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 22:22:07.177870  530747 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1202 22:22:07.177893  530747 kubeadm.go:319] 
	I1202 22:22:07.178011  530747 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1202 22:22:07.178084  530747 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-250247] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-250247] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000061877s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
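The failure mode here (and in every retry below) is the same: kubeadm polls the kubelet's health endpoint on 127.0.0.1:10248 for 4 minutes and only ever gets connection refused, meaning the kubelet process never bound the port. On the node, the probe kubeadm runs plus the two commands it suggests look roughly like this (a diagnostic sketch, not output from this log):

    # Probe the health endpoint kubeadm waits on; "connection refused"
    # means the kubelet never started listening.
    curl -sS http://127.0.0.1:10248/healthz || true
    # Check whether the unit is running at all, then read its recent log.
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 50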
	
	I1202 22:22:07.178174  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1202 22:22:07.582154  530747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:22:07.596186  530747 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 22:22:07.596254  530747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 22:22:07.604575  530747 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 22:22:07.604597  530747 kubeadm.go:158] found existing configuration files:
	
	I1202 22:22:07.604653  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 22:22:07.612860  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 22:22:07.612925  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 22:22:07.620756  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 22:22:07.628603  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 22:22:07.628670  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 22:22:07.636283  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 22:22:07.644222  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 22:22:07.644282  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 22:22:07.651905  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 22:22:07.659956  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 22:22:07.660066  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 22:22:07.667857  530747 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 22:22:07.708384  530747 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 22:22:07.708648  530747 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 22:22:07.776064  530747 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 22:22:07.776184  530747 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 22:22:07.776244  530747 kubeadm.go:319] OS: Linux
	I1202 22:22:07.776341  530747 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 22:22:07.776426  530747 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 22:22:07.776506  530747 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 22:22:07.776581  530747 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 22:22:07.776644  530747 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 22:22:07.776713  530747 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 22:22:07.776776  530747 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 22:22:07.776870  530747 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 22:22:07.776937  530747 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 22:22:07.856435  530747 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 22:22:07.856606  530747 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 22:22:07.856736  530747 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 22:22:07.863429  530747 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 22:22:07.868636  530747 out.go:252]   - Generating certificates and keys ...
	I1202 22:22:07.868735  530747 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 22:22:07.868847  530747 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 22:22:07.868963  530747 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 22:22:07.869041  530747 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 22:22:07.869134  530747 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 22:22:07.869207  530747 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 22:22:07.869308  530747 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 22:22:07.869882  530747 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 22:22:07.870407  530747 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 22:22:07.870926  530747 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 22:22:07.871398  530747 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 22:22:07.871624  530747 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 22:22:08.110111  530747 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 22:22:08.319444  530747 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 22:22:08.500616  530747 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 22:22:08.835962  530747 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 22:22:09.133922  530747 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 22:22:09.134451  530747 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 22:22:09.137223  530747 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 22:22:09.140286  530747 out.go:252]   - Booting up control plane ...
	I1202 22:22:09.140379  530747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 22:22:09.140452  530747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 22:22:09.141569  530747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 22:22:09.162438  530747 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 22:22:09.162564  530747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 22:22:09.170170  530747 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 22:22:09.170681  530747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 22:22:09.170905  530747 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 22:22:09.303893  530747 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 22:22:09.304013  530747 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 22:26:09.304341  530747 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000829165s
	I1202 22:26:09.304377  530747 kubeadm.go:319] 
	I1202 22:26:09.304436  530747 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 22:26:09.304468  530747 kubeadm.go:319] 	- The kubelet is not running
	I1202 22:26:09.304573  530747 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 22:26:09.304578  530747 kubeadm.go:319] 
	I1202 22:26:09.304682  530747 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 22:26:09.304714  530747 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 22:26:09.304745  530747 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 22:26:09.304749  530747 kubeadm.go:319] 
	I1202 22:26:09.313335  530747 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 22:26:09.313922  530747 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 22:26:09.314042  530747 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 22:26:09.314323  530747 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1202 22:26:09.314329  530747 kubeadm.go:319] 
	I1202 22:26:09.314401  530747 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1202 22:26:09.314453  530747 kubeadm.go:403] duration metric: took 8m6.243910977s to StartCluster
	I1202 22:26:09.314492  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:26:09.314550  530747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:26:09.344686  530747 cri.go:89] found id: ""
	I1202 22:26:09.344749  530747 logs.go:282] 0 containers: []
	W1202 22:26:09.344773  530747 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:26:09.344791  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:26:09.344878  530747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:26:09.374164  530747 cri.go:89] found id: ""
	I1202 22:26:09.374191  530747 logs.go:282] 0 containers: []
	W1202 22:26:09.374200  530747 logs.go:284] No container was found matching "etcd"
	I1202 22:26:09.374208  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:26:09.374272  530747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:26:09.397230  530747 cri.go:89] found id: ""
	I1202 22:26:09.397254  530747 logs.go:282] 0 containers: []
	W1202 22:26:09.397263  530747 logs.go:284] No container was found matching "coredns"
	I1202 22:26:09.397269  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:26:09.397328  530747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:26:09.421960  530747 cri.go:89] found id: ""
	I1202 22:26:09.421985  530747 logs.go:282] 0 containers: []
	W1202 22:26:09.421994  530747 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:26:09.422001  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:26:09.422060  530747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:26:09.446528  530747 cri.go:89] found id: ""
	I1202 22:26:09.446549  530747 logs.go:282] 0 containers: []
	W1202 22:26:09.446558  530747 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:26:09.446595  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:26:09.446678  530747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:26:09.471220  530747 cri.go:89] found id: ""
	I1202 22:26:09.471253  530747 logs.go:282] 0 containers: []
	W1202 22:26:09.471262  530747 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:26:09.471268  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:26:09.471341  530747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:26:09.495256  530747 cri.go:89] found id: ""
	I1202 22:26:09.495280  530747 logs.go:282] 0 containers: []
	W1202 22:26:09.495288  530747 logs.go:284] No container was found matching "kindnet"
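The seven identical listing blocks above are one loop in minikube's log gatherer: for each expected control-plane component it asks crictl for matching containers and finds none, confirming that nothing ever started under containerd. The equivalent check, condensed as a sketch:

    # Reproduce the empty-cluster check: list containers per component.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
        echo "== $name =="
        sudo crictl ps -a --quiet --name="$name"
    done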
	I1202 22:26:09.495298  530747 logs.go:123] Gathering logs for kubelet ...
	I1202 22:26:09.495309  530747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:26:09.554867  530747 logs.go:123] Gathering logs for dmesg ...
	I1202 22:26:09.554905  530747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:26:09.573421  530747 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:26:09.573449  530747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:26:09.657032  530747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:26:09.648318    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:09.649093    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:09.650698    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:09.651305    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:09.652859    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:26:09.648318    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:09.649093    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:09.650698    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:09.651305    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:09.652859    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:26:09.657063  530747 logs.go:123] Gathering logs for containerd ...
	I1202 22:26:09.657076  530747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:26:09.698332  530747 logs.go:123] Gathering logs for container status ...
	I1202 22:26:09.698373  530747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 22:26:09.727969  530747 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000829165s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 22:26:09.728028  530747 out.go:285] * 
	W1202 22:26:09.728078  530747 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000829165s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
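The recurring SystemVerification warning points at the likely culprit: this host runs kernel 5.15 with cgroups v1, and kubelet v1.35+ must explicitly opt back in via the FailCgroupV1 configuration option. A hedged sketch of that opt-in, assuming a kubeadm-style KubeletConfiguration patch and an illustrative file path (field name `failCgroupV1` per the warning; verify against your kubelet version):

    # Opt kubelet v1.35+ back into cgroups v1, per the warning's KEP link.
    # /var/tmp/minikube/kubelet-patch.yaml is a placeholder location.
    cat <<'EOF' | sudo tee /var/tmp/minikube/kubelet-patch.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF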
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000829165s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
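	The cgroups v1 warning above says that kubelet v1.35+ only keeps cgroup v1 support when the configuration option named in the warning is set to false. A hedged sketch of applying that setting through the same kubeadm patches mechanism this run already uses for the "kubeletconfiguration" target (the /tmp path is illustrative, and the field spelling follows the warning text):
	
	$ mkdir -p /tmp/kubeadm-patches
	$ cat <<'EOF' > /tmp/kubeadm-patches/kubeletconfiguration.yaml
	failCgroupV1: false
	EOF
	$ sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --patches /tmp/kubeadm-patches
	
	The warning also requires explicitly skipping the SystemVerification check, which the failing command already does via --ignore-preflight-errors.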
	
	W1202 22:26:09.728093  530747 out.go:285] * 
	W1202 22:26:09.730426  530747 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 22:26:09.735441  530747 out.go:203] 
	W1202 22:26:09.738403  530747 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000829165s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	
	W1202 22:26:09.738449  530747 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 22:26:09.738475  530747 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 22:26:09.741577  530747 out.go:203] 

** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-arm64 start -p newest-cni-250247 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
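The suggestion logged above (kubelet.cgroup-driver=systemd) can be folded into the same arguments the test used. A sketch of the retried invocation, unchanged except for the added extra-config flag:

    $ out/minikube-linux-arm64 start -p newest-cni-250247 --memory=3072 --alsologtostderr \
        --wait=apiserver,system_pods,default_sa --network-plugin=cni \
        --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
        --extra-config=kubelet.cgroup-driver=systemd \
        --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0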
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-250247
helpers_test.go:243: (dbg) docker inspect newest-cni-250247:

-- stdout --
	[
	    {
	        "Id": "8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2",
	        "Created": "2025-12-02T22:17:45.695373395Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 531060,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T22:17:45.76228908Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2/hosts",
	        "LogPath": "/var/lib/docker/containers/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2-json.log",
	        "Name": "/newest-cni-250247",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-250247:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-250247",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2",
	                "LowerDir": "/var/lib/docker/overlay2/ca046fece97404be279f28506c35572e6b66d2a0b57e70b756461317d0a04310-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ca046fece97404be279f28506c35572e6b66d2a0b57e70b756461317d0a04310/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ca046fece97404be279f28506c35572e6b66d2a0b57e70b756461317d0a04310/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ca046fece97404be279f28506c35572e6b66d2a0b57e70b756461317d0a04310/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-250247",
	                "Source": "/var/lib/docker/volumes/newest-cni-250247/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-250247",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-250247",
	                "name.minikube.sigs.k8s.io": "newest-cni-250247",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7d0c64ba16bbb08b47bf29cabc5b530a394e75cd494629324cf5f757a6339c21",
	            "SandboxKey": "/var/run/docker/netns/7d0c64ba16bb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33414"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33417"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33416"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-250247": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:c0:2b:98:94:65",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cfffc9981d9cab6ce5981c2e79bfb0dd15ae8455b64d0bfc795000bbbe273d91",
	                    "EndpointID": "fdf2c5f777ff277e526828919e43c78a65f8b5b8ad0c0be50ec029d55e549da2",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-250247",
	                        "8d631b193c97"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
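Most of the inspect dump above is noise for this failure; the container state and the mapped API-server port can be pulled directly with docker's standard Go templating (plain docker CLI, nothing minikube-specific):

    $ docker inspect -f '{{.State.Status}}' newest-cni-250247
    $ docker inspect -f '{{(index .NetworkSettings.Networks "newest-cni-250247").IPAddress}}' newest-cni-250247
    $ docker port newest-cni-250247 8443/tcp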
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-250247 -n newest-cni-250247
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-250247 -n newest-cni-250247: exit status 6 (332.126337ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1202 22:26:10.160148  543645 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-250247" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
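The warning and the stderr line above agree: the cluster container is Running, but the profile is missing from the kubeconfig, so status exits 6. The fix the warning itself suggests:

    $ out/minikube-linux-arm64 update-context -p newest-cni-250247
    $ kubectl config current-context   # should now print newest-cni-250247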
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-250247 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-716386 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:14 UTC │
	│ stop    │ -p embed-certs-716386 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:14 UTC │
	│ addons  │ enable dashboard -p embed-certs-716386 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:14 UTC │
	│ start   │ -p embed-certs-716386 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:15 UTC │
	│ image   │ embed-certs-716386 image list --format=json                                                                                                                                                                                                                │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ pause   │ -p embed-certs-716386 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ unpause │ -p embed-certs-716386 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p embed-certs-716386                                                                                                                                                                                                                                      │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p embed-certs-716386                                                                                                                                                                                                                                      │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p disable-driver-mounts-122586                                                                                                                                                                                                                            │ disable-driver-mounts-122586 │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ start   │ -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-444714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ stop    │ -p default-k8s-diff-port-444714 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-444714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ start   │ -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:17 UTC │
	│ image   │ default-k8s-diff-port-444714 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ pause   │ -p default-k8s-diff-port-444714 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ unpause │ -p default-k8s-diff-port-444714 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ delete  │ -p default-k8s-diff-port-444714                                                                                                                                                                                                                            │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ delete  │ -p default-k8s-diff-port-444714                                                                                                                                                                                                                            │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ start   │ -p newest-cni-250247 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-904303 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:21 UTC │                     │
	│ stop    │ -p no-preload-904303 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:23 UTC │ 02 Dec 25 22:23 UTC │
	│ addons  │ enable dashboard -p no-preload-904303 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:23 UTC │ 02 Dec 25 22:23 UTC │
	│ start   │ -p no-preload-904303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:23 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
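	Rows with an empty END TIME never completed: the newest-cni start under test here, plus the concurrent no-preload addons-enable and start. Each can be replayed verbatim from its ARGS cell, e.g.:
	
	$ out/minikube-linux-arm64 start -p no-preload-904303 --memory=3072 --alsologtostderr --wait=true \
	    --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0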
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 22:23:22
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 22:23:22.383311  539599 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:23:22.383495  539599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:23:22.383526  539599 out.go:374] Setting ErrFile to fd 2...
	I1202 22:23:22.383548  539599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:23:22.384147  539599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 22:23:22.384563  539599 out.go:368] Setting JSON to false
	I1202 22:23:22.385463  539599 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14741,"bootTime":1764699462,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 22:23:22.385557  539599 start.go:143] virtualization:  
	I1202 22:23:22.388730  539599 out.go:179] * [no-preload-904303] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:23:22.392696  539599 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:23:22.392797  539599 notify.go:221] Checking for updates...
	I1202 22:23:22.398478  539599 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:23:22.401214  539599 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:23:22.404143  539599 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 22:23:22.406933  539599 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:23:22.409907  539599 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:23:22.413394  539599 config.go:182] Loaded profile config "no-preload-904303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:23:22.413984  539599 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:23:22.437751  539599 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:23:22.437859  539599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:23:22.494629  539599 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:23:22.485394686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:23:22.494770  539599 docker.go:319] overlay module found
	I1202 22:23:22.499692  539599 out.go:179] * Using the docker driver based on existing profile
	I1202 22:23:22.502553  539599 start.go:309] selected driver: docker
	I1202 22:23:22.502578  539599 start.go:927] validating driver "docker" against &{Name:no-preload-904303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-904303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:23:22.502679  539599 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:23:22.503398  539599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:23:22.557969  539599 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:23:22.549356271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:23:22.558338  539599 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 22:23:22.558370  539599 cni.go:84] Creating CNI manager for ""
	I1202 22:23:22.558425  539599 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:23:22.558467  539599 start.go:353] cluster config:
	{Name:no-preload-904303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-904303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:23:22.561606  539599 out.go:179] * Starting "no-preload-904303" primary control-plane node in "no-preload-904303" cluster
	I1202 22:23:22.564413  539599 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 22:23:22.567370  539599 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 22:23:22.570230  539599 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:23:22.570321  539599 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 22:23:22.570386  539599 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/config.json ...
	I1202 22:23:22.570675  539599 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.570753  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 22:23:22.570769  539599 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.774µs
	I1202 22:23:22.570784  539599 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 22:23:22.570800  539599 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.570834  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 22:23:22.570844  539599 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 44.823µs
	I1202 22:23:22.570850  539599 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 22:23:22.570866  539599 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.570898  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 22:23:22.570907  539599 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 42.493µs
	I1202 22:23:22.570915  539599 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 22:23:22.570926  539599 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.570958  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 22:23:22.570975  539599 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 42.838µs
	I1202 22:23:22.570982  539599 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 22:23:22.570991  539599 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.571040  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 22:23:22.571049  539599 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 59.452µs
	I1202 22:23:22.571055  539599 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 22:23:22.571064  539599 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.571094  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 22:23:22.571103  539599 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 39.457µs
	I1202 22:23:22.571108  539599 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 22:23:22.571117  539599 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.571146  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 22:23:22.571154  539599 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 38.03µs
	I1202 22:23:22.571159  539599 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 22:23:22.571168  539599 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.571197  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 22:23:22.571205  539599 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 38.12µs
	I1202 22:23:22.571211  539599 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 22:23:22.571217  539599 cache.go:87] Successfully saved all images to host disk.
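Each cache check above completes in tens of microseconds because every tarball already exists under the profile's cache root. A sketch of how to confirm the same state by hand (the path is taken verbatim from the log lines above):

    # The tarballs cache.go just verified, saved under the profile's cache root
    ls -lh /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/
    # Expect one saved-image tarball per v1.35.0-beta.0 component plus pause_3.10.1 and
    # etcd_3.6.5-0; coredns sits one directory deeper. cache.go only re-downloads when
    # a file is missing, which is why every entry logs "exists ... succeeded" here.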
	I1202 22:23:22.590450  539599 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 22:23:22.590474  539599 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 22:23:22.590493  539599 cache.go:243] Successfully downloaded all kic artifacts
	I1202 22:23:22.590523  539599 start.go:360] acquireMachinesLock for no-preload-904303: {Name:mk2c72bf119f004a39efee961482984889590787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.590579  539599 start.go:364] duration metric: took 35.757µs to acquireMachinesLock for "no-preload-904303"
	I1202 22:23:22.590604  539599 start.go:96] Skipping create...Using existing machine configuration
	I1202 22:23:22.590613  539599 fix.go:54] fixHost starting: 
	I1202 22:23:22.590870  539599 cli_runner.go:164] Run: docker container inspect no-preload-904303 --format={{.State.Status}}
	I1202 22:23:22.607708  539599 fix.go:112] recreateIfNeeded on no-preload-904303: state=Stopped err=<nil>
	W1202 22:23:22.607739  539599 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 22:23:22.611134  539599 out.go:252] * Restarting existing docker container for "no-preload-904303" ...
	I1202 22:23:22.611232  539599 cli_runner.go:164] Run: docker start no-preload-904303
	I1202 22:23:22.872287  539599 cli_runner.go:164] Run: docker container inspect no-preload-904303 --format={{.State.Status}}
	I1202 22:23:22.898165  539599 kic.go:430] container "no-preload-904303" state is running.
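The restart path reuses the stopped container rather than recreating the machine. The same inspect/start/inspect cycle the log shows, reproduced by hand (quoting the Go template for the shell):

    docker container inspect no-preload-904303 --format '{{.State.Status}}'   # "exited" before the restart
    docker start no-preload-904303
    docker container inspect no-preload-904303 --format '{{.State.Status}}'   # "running", matching kic.go:430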
	I1202 22:23:22.898575  539599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-904303
	I1202 22:23:22.922950  539599 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/config.json ...
	I1202 22:23:22.923163  539599 machine.go:94] provisionDockerMachine start ...
	I1202 22:23:22.923221  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:22.941136  539599 main.go:143] libmachine: Using SSH client type: native
	I1202 22:23:22.941823  539599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1202 22:23:22.941839  539599 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 22:23:22.942552  539599 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 22:23:26.093608  539599 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-904303
	
	I1202 22:23:26.093635  539599 ubuntu.go:182] provisioning hostname "no-preload-904303"
	I1202 22:23:26.093723  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:26.111423  539599 main.go:143] libmachine: Using SSH client type: native
	I1202 22:23:26.111760  539599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1202 22:23:26.111780  539599 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-904303 && echo "no-preload-904303" | sudo tee /etc/hostname
	I1202 22:23:26.266533  539599 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-904303
	
	I1202 22:23:26.266609  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:26.285316  539599 main.go:143] libmachine: Using SSH client type: native
	I1202 22:23:26.285632  539599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1202 22:23:26.285686  539599 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-904303' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-904303/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-904303' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 22:23:26.434344  539599 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 22:23:26.434376  539599 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 22:23:26.434408  539599 ubuntu.go:190] setting up certificates
	I1202 22:23:26.434418  539599 provision.go:84] configureAuth start
	I1202 22:23:26.434484  539599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-904303
	I1202 22:23:26.452411  539599 provision.go:143] copyHostCerts
	I1202 22:23:26.452491  539599 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 22:23:26.452511  539599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 22:23:26.452589  539599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 22:23:26.452740  539599 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 22:23:26.452752  539599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 22:23:26.452788  539599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 22:23:26.452857  539599 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 22:23:26.452867  539599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 22:23:26.452891  539599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 22:23:26.452955  539599 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.no-preload-904303 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-904303]
	I1202 22:23:26.957849  539599 provision.go:177] copyRemoteCerts
	I1202 22:23:26.957926  539599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 22:23:26.957991  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:26.974993  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:27.078480  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 22:23:27.095667  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 22:23:27.113042  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 22:23:27.131355  539599 provision.go:87] duration metric: took 696.919514ms to configureAuth
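configureAuth generated server.pem with the SAN set listed at provision.go:117. A sketch for inspecting those SANs directly (path from this log):

    # Show the Subject Alternative Names baked into the freshly generated server cert
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem |
      grep -A1 'Subject Alternative Name'
    # Should list the five SANs from the log line above:
    # 127.0.0.1, 192.168.76.2, localhost, minikube, no-preload-904303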
	I1202 22:23:27.131382  539599 ubuntu.go:206] setting minikube options for container-runtime
	I1202 22:23:27.131619  539599 config.go:182] Loaded profile config "no-preload-904303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:23:27.131635  539599 machine.go:97] duration metric: took 4.208463625s to provisionDockerMachine
	I1202 22:23:27.131645  539599 start.go:293] postStartSetup for "no-preload-904303" (driver="docker")
	I1202 22:23:27.131661  539599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 22:23:27.131736  539599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 22:23:27.131781  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:27.148639  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:27.253719  539599 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 22:23:27.257164  539599 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 22:23:27.257191  539599 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 22:23:27.257209  539599 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 22:23:27.257270  539599 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 22:23:27.257353  539599 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 22:23:27.257455  539599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 22:23:27.264919  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:23:27.281885  539599 start.go:296] duration metric: took 150.211376ms for postStartSetup
	I1202 22:23:27.281973  539599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:23:27.282024  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:27.311997  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:27.415055  539599 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 22:23:27.419717  539599 fix.go:56] duration metric: took 4.829096727s for fixHost
	I1202 22:23:27.419743  539599 start.go:83] releasing machines lock for "no-preload-904303", held for 4.829150862s
	I1202 22:23:27.419810  539599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-904303
	I1202 22:23:27.437406  539599 ssh_runner.go:195] Run: cat /version.json
	I1202 22:23:27.437474  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:27.437744  539599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 22:23:27.437810  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:27.458622  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:27.458779  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:27.657246  539599 ssh_runner.go:195] Run: systemctl --version
	I1202 22:23:27.663712  539599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 22:23:27.667996  539599 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 22:23:27.668128  539599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 22:23:27.676170  539599 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 22:23:27.676194  539599 start.go:496] detecting cgroup driver to use...
	I1202 22:23:27.676226  539599 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 22:23:27.676274  539599 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 22:23:27.693769  539599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 22:23:27.707725  539599 docker.go:218] disabling cri-docker service (if available) ...
	I1202 22:23:27.707786  539599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 22:23:27.723493  539599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 22:23:27.736424  539599 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 22:23:27.854661  539599 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 22:23:27.963950  539599 docker.go:234] disabling docker service ...
	I1202 22:23:27.964063  539599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 22:23:27.978913  539599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 22:23:27.991719  539599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 22:23:28.130013  539599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 22:23:28.245844  539599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
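A sketch of how one could verify, inside the container (e.g. via docker exec no-preload-904303), that the stop/disable/mask sequence above left Docker's engine fully out of the way:

    systemctl is-active docker.service     # prints "inactive"
    systemctl is-enabled docker.service    # prints "masked"
    systemctl is-active cri-docker.socket  # prints "inactive"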
	I1202 22:23:28.260063  539599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 22:23:28.275361  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 22:23:28.284485  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 22:23:28.293418  539599 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 22:23:28.293496  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 22:23:28.303246  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:23:28.311801  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 22:23:28.320376  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:23:28.329142  539599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 22:23:28.337281  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 22:23:28.345966  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 22:23:28.354511  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1202 22:23:28.364146  539599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 22:23:28.372643  539599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 22:23:28.380193  539599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:23:28.515073  539599 ssh_runner.go:195] Run: sudo systemctl restart containerd
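Taken together, the sed edits above leave the CRI plugin section of /etc/containerd/config.toml in roughly this shape (a reconstruction from the sed patterns in this log, not a verbatim dump of the file):

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false   # cgroupfs driver, per containerd.go:146

The daemon-reload plus restart that follows is what makes these edits take effect before the 60s socket wait begins.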
	I1202 22:23:28.603166  539599 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 22:23:28.603246  539599 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 22:23:28.607302  539599 start.go:564] Will wait 60s for crictl version
	I1202 22:23:28.607362  539599 ssh_runner.go:195] Run: which crictl
	I1202 22:23:28.610925  539599 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 22:23:28.635209  539599 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 22:23:28.635324  539599 ssh_runner.go:195] Run: containerd --version
	I1202 22:23:28.654862  539599 ssh_runner.go:195] Run: containerd --version
	I1202 22:23:28.679684  539599 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 22:23:28.682772  539599 cli_runner.go:164] Run: docker network inspect no-preload-904303 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:23:28.698164  539599 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1202 22:23:28.701843  539599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:23:28.711778  539599 kubeadm.go:884] updating cluster {Name:no-preload-904303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-904303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 22:23:28.711898  539599 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:23:28.711951  539599 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:23:28.735775  539599 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 22:23:28.735798  539599 cache_images.go:86] Images are preloaded, skipping loading
	I1202 22:23:28.735806  539599 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1202 22:23:28.735943  539599 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-904303 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-904303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 22:23:28.736021  539599 ssh_runner.go:195] Run: sudo crictl info
	I1202 22:23:28.764321  539599 cni.go:84] Creating CNI manager for ""
	I1202 22:23:28.764345  539599 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:23:28.764366  539599 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 22:23:28.764390  539599 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-904303 NodeName:no-preload-904303 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 22:23:28.764517  539599 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-904303"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 22:23:28.764598  539599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:23:28.772222  539599 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 22:23:28.772309  539599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 22:23:28.779448  539599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 22:23:28.793067  539599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 22:23:28.808283  539599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1202 22:23:28.821081  539599 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1202 22:23:28.825311  539599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:23:28.834378  539599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:23:28.950783  539599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:23:28.967883  539599 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303 for IP: 192.168.76.2
	I1202 22:23:28.967905  539599 certs.go:195] generating shared ca certs ...
	I1202 22:23:28.967921  539599 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:23:28.968118  539599 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 22:23:28.968196  539599 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 22:23:28.968211  539599 certs.go:257] generating profile certs ...
	I1202 22:23:28.968343  539599 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/client.key
	I1202 22:23:28.968433  539599 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/apiserver.key.c0dba49d
	I1202 22:23:28.968505  539599 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/proxy-client.key
	I1202 22:23:28.968647  539599 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 22:23:28.968707  539599 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 22:23:28.968723  539599 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 22:23:28.968768  539599 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 22:23:28.968803  539599 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 22:23:28.968848  539599 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 22:23:28.968924  539599 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:23:28.969565  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 22:23:28.991512  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 22:23:29.009709  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 22:23:29.027230  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 22:23:29.044859  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 22:23:29.062550  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 22:23:29.081123  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 22:23:29.099160  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 22:23:29.116143  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 22:23:29.133341  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 22:23:29.151391  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 22:23:29.168872  539599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 22:23:29.181520  539599 ssh_runner.go:195] Run: openssl version
	I1202 22:23:29.187676  539599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 22:23:29.196257  539599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 22:23:29.205152  539599 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 22:23:29.205469  539599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 22:23:29.248525  539599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
	I1202 22:23:29.256283  539599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 22:23:29.264508  539599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 22:23:29.268219  539599 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 22:23:29.268296  539599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 22:23:29.310186  539599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 22:23:29.318336  539599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 22:23:29.326934  539599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:23:29.330579  539599 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:23:29.330642  539599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:23:29.371552  539599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
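The ln targets above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash lookup convention: a CA is found at <hash>.0, where <hash> is the output of openssl x509 -hash. The link minikube just created can be reproduced by hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"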
	I1202 22:23:29.379401  539599 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 22:23:29.383174  539599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 22:23:29.424449  539599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 22:23:29.465479  539599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 22:23:29.506825  539599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 22:23:29.548324  539599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 22:23:29.590054  539599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
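Each -checkend 86400 probe exits 0 only if the certificate is still valid 24 hours from now, so a zero exit on every control-plane cert is what lets the restart path skip regeneration. For example:

    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo "cert valid for at least another day; keep it"
    else
      echo "cert expires within 24h; would regenerate"
    fi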
	I1202 22:23:29.631250  539599 kubeadm.go:401] StartCluster: {Name:no-preload-904303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-904303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:23:29.631343  539599 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 22:23:29.631409  539599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:23:29.658856  539599 cri.go:89] found id: ""
	I1202 22:23:29.658951  539599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 22:23:29.666619  539599 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 22:23:29.666682  539599 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 22:23:29.666755  539599 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 22:23:29.674368  539599 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 22:23:29.674844  539599 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-904303" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:23:29.674950  539599 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-261381/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-904303" cluster setting kubeconfig missing "no-preload-904303" context setting]
	I1202 22:23:29.675336  539599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:23:29.676685  539599 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 22:23:29.684684  539599 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1202 22:23:29.684718  539599 kubeadm.go:602] duration metric: took 18.021774ms to restartPrimaryControlPlane
	I1202 22:23:29.684728  539599 kubeadm.go:403] duration metric: took 53.489812ms to StartCluster
	I1202 22:23:29.684761  539599 settings.go:142] acquiring lock: {Name:mk484fa83ac7553aeb154b510943680cadb4046e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:23:29.684832  539599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:23:29.685511  539599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:23:29.685828  539599 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 22:23:29.686092  539599 config.go:182] Loaded profile config "no-preload-904303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:23:29.686168  539599 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 22:23:29.686270  539599 addons.go:70] Setting storage-provisioner=true in profile "no-preload-904303"
	I1202 22:23:29.686289  539599 addons.go:70] Setting dashboard=true in profile "no-preload-904303"
	I1202 22:23:29.686314  539599 addons.go:239] Setting addon dashboard=true in "no-preload-904303"
	W1202 22:23:29.686329  539599 addons.go:248] addon dashboard should already be in state true
	I1202 22:23:29.686347  539599 addons.go:70] Setting default-storageclass=true in profile "no-preload-904303"
	I1202 22:23:29.686392  539599 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-904303"
	I1202 22:23:29.686365  539599 host.go:66] Checking if "no-preload-904303" exists ...
	I1202 22:23:29.686741  539599 cli_runner.go:164] Run: docker container inspect no-preload-904303 --format={{.State.Status}}
	I1202 22:23:29.687205  539599 cli_runner.go:164] Run: docker container inspect no-preload-904303 --format={{.State.Status}}
	I1202 22:23:29.686300  539599 addons.go:239] Setting addon storage-provisioner=true in "no-preload-904303"
	I1202 22:23:29.687386  539599 host.go:66] Checking if "no-preload-904303" exists ...
	I1202 22:23:29.687839  539599 cli_runner.go:164] Run: docker container inspect no-preload-904303 --format={{.State.Status}}
	I1202 22:23:29.691834  539599 out.go:179] * Verifying Kubernetes components...
	I1202 22:23:29.694973  539599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:23:29.723676  539599 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 22:23:29.726567  539599 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 22:23:29.727985  539599 addons.go:239] Setting addon default-storageclass=true in "no-preload-904303"
	I1202 22:23:29.728025  539599 host.go:66] Checking if "no-preload-904303" exists ...
	I1202 22:23:29.728437  539599 cli_runner.go:164] Run: docker container inspect no-preload-904303 --format={{.State.Status}}
	I1202 22:23:29.730468  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 22:23:29.730497  539599 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 22:23:29.730574  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:29.767956  539599 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:23:29.773779  539599 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:23:29.773811  539599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 22:23:29.773880  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:29.776990  539599 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 22:23:29.777009  539599 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 22:23:29.777075  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:29.799621  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:29.808373  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:29.833909  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:29.941913  539599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:23:29.974317  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:23:29.986732  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 22:23:29.986766  539599 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 22:23:29.988626  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 22:23:30.055576  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 22:23:30.055607  539599 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 22:23:30.079790  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 22:23:30.079826  539599 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 22:23:30.094823  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 22:23:30.094847  539599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 22:23:30.108744  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 22:23:30.108770  539599 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 22:23:30.122434  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 22:23:30.122505  539599 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 22:23:30.136402  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 22:23:30.136428  539599 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 22:23:30.149357  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 22:23:30.149379  539599 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 22:23:30.162415  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:23:30.162439  539599 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 22:23:30.175846  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:23:30.793328  539599 node_ready.go:35] waiting up to 6m0s for node "no-preload-904303" to be "Ready" ...
	W1202 22:23:30.793609  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:30.793677  539599 retry.go:31] will retry after 226.751663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:30.793357  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:30.793705  539599 retry.go:31] will retry after 335.186857ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:30.793399  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:30.793714  539599 retry.go:31] will retry after 230.72192ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.021321  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:23:31.024967  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 22:23:31.129921  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:31.131942  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.131973  539599 retry.go:31] will retry after 517.463505ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:31.159634  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.159717  539599 retry.go:31] will retry after 524.371625ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:31.201804  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.201874  539599 retry.go:31] will retry after 509.080585ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.649705  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:23:31.685138  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 22:23:31.711681  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:31.715443  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.715487  539599 retry.go:31] will retry after 516.235738ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:31.771458  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.771492  539599 retry.go:31] will retry after 380.898006ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:31.788553  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.788587  539599 retry.go:31] will retry after 774.998834ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:32.153620  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:23:32.209503  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:32.209561  539599 retry.go:31] will retry after 823.770631ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:32.232894  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:32.305176  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:32.305224  539599 retry.go:31] will retry after 976.715215ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:32.563746  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:32.634445  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:32.634479  539599 retry.go:31] will retry after 1.162769509s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:32.794321  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:33.033893  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:23:33.114977  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:33.115010  539599 retry.go:31] will retry after 714.879346ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:33.282251  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:33.363554  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:33.363588  539599 retry.go:31] will retry after 844.770065ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:33.798288  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:23:33.830720  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:23:33.885889  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:33.885960  539599 retry.go:31] will retry after 916.714322ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:33.923753  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:33.923789  539599 retry.go:31] will retry after 2.520575053s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:34.209208  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:34.271506  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:34.271542  539599 retry.go:31] will retry after 1.776064467s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1202 22:23:34.803362  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:34.881750  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:34.881784  539599 retry.go:31] will retry after 1.907866633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1202 22:23:35.294715  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:36.048128  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:36.141002  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:36.141036  539599 retry.go:31] will retry after 3.038923278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1202 22:23:36.444914  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:23:36.502465  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:36.502494  539599 retry.go:31] will retry after 3.727542871s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1202 22:23:36.789806  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:36.867898  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:36.867931  539599 retry.go:31] will retry after 1.939289637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1202 22:23:37.294882  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:38.808288  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:38.866824  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:38.866857  539599 retry.go:31] will retry after 5.857922191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1202 22:23:39.180195  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:39.238103  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:39.238139  539599 retry.go:31] will retry after 4.546361483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1202 22:23:39.794712  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:40.230300  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:23:40.298135  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:40.298168  539599 retry.go:31] will retry after 2.477378234s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1202 22:23:42.294051  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:42.775949  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:23:42.856429  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:42.856458  539599 retry.go:31] will retry after 3.440810022s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1202 22:23:43.784770  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:43.864473  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:43.864510  539599 retry.go:31] will retry after 7.11067177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1202 22:23:44.294480  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:44.725002  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:44.781836  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:44.781872  539599 retry.go:31] will retry after 4.295308457s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1202 22:23:46.294868  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:46.298023  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:23:46.357922  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:46.357955  539599 retry.go:31] will retry after 9.581881684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1202 22:23:48.793879  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:49.077320  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:49.140226  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:49.140259  539599 retry.go:31] will retry after 6.825419406s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1202 22:23:50.976239  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:51.036594  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:51.036631  539599 retry.go:31] will retry after 6.351616515s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1202 22:23:51.293979  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:23:53.294465  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:23:55.294759  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:55.941027  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 22:23:55.966502  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:56.026736  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:56.026772  539599 retry.go:31] will retry after 11.682115483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1202 22:23:56.034530  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:56.034568  539599 retry.go:31] will retry after 21.573683328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1202 22:23:57.388457  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:57.448613  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:57.448645  539599 retry.go:31] will retry after 10.383504228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	W1202 22:23:57.794117  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:00.314267  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:02.794140  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:05.294024  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:24:07.709736  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:24:07.771482  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:24:07.771516  539599 retry.go:31] will retry after 18.342032468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:24:07.793883  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:24:07.833189  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:24:07.897703  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:24:07.897737  539599 retry.go:31] will retry after 18.464738845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:24:09.794642  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:12.294662  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:14.793819  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:17.294881  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:24:17.608495  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:24:17.666726  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:24:17.666760  539599 retry.go:31] will retry after 22.163689128s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:24:19.794151  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:22.293956  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:24.793964  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:24:26.114427  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:24:26.175665  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:24:26.175697  539599 retry.go:31] will retry after 38.633031501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:24:26.363620  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:24:26.423962  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:24:26.423998  539599 retry.go:31] will retry after 35.128284125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:24:27.293903  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:29.793923  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:31.794841  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:34.294771  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:36.793951  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:38.794027  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:24:39.831556  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:24:39.903792  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:24:39.903830  539599 retry.go:31] will retry after 44.791338045s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:24:40.794755  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:43.293945  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:45.294875  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:47.793934  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:50.293988  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:52.794271  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:55.293940  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:57.293989  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:59.294085  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:25:01.552593  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:25:01.623403  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:25:01.623527  539599 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
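The dashboard enable gives up here after the backoff retries logged above (roughly 10s, 18s, then 35s): every attempt dies in kubectl's client-side validation, which must download the OpenAPI schema from the apiserver on localhost:8443 before applying anything. As the error text itself suggests, that validation round-trip can be skipped; a minimal sketch using the same binary and manifest paths shown in the log:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/dashboard-ns.yaml

Skipping validation only removes the schema fetch, though; the apply itself still has to reach the apiserver, so it would fail the same way for as long as 8443 refuses connections.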
	W1202 22:25:01.794122  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:04.293962  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:25:04.809098  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:25:04.882204  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:25:04.882306  539599 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1202 22:25:06.793963  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:09.293949  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:11.294016  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:13.793954  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:16.293941  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:18.794012  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:21.293858  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:23.293935  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:25:24.695432  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:25:24.758224  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:25:24.758320  539599 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 22:25:24.761124  539599 out.go:179] * Enabled addons: 
	I1202 22:25:24.763849  539599 addons.go:530] duration metric: took 1m55.077683231s for enable addons: enabled=[]
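All three addons (dashboard, default-storageclass, storage-provisioner) fail with the identical symptom, dial tcp [::1]:8443: connect: connection refused, so the addon phase ends after 1m55s with enabled=[]. Nothing here is addon-specific; the apiserver on this profile simply never came up. A quick way to confirm that from the host, assuming the standard minikube CLI, that curl is present in the node image, and the profile name taken from the node_ready lines above (/livez is served to unauthenticated clients by default on current Kubernetes):

    minikube ssh -p no-preload-904303 "curl -sk https://localhost:8443/livez"

While the control plane is down this returns the same connection-refused error seen throughout; a healthy apiserver answers ok.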
	W1202 22:25:25.294695  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:27.794152  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:29.794455  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:32.294020  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:34.793867  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:36.794964  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:39.293887  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:41.294800  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:43.793975  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:46.294913  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:48.794053  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:51.294004  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:53.793974  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:55.794883  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:58.294031  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:00.298658  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:02.793984  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:05.294880  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:26:09.304341  530747 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000829165s
	I1202 22:26:09.304377  530747 kubeadm.go:319] 
	I1202 22:26:09.304436  530747 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 22:26:09.304468  530747 kubeadm.go:319] 	- The kubelet is not running
	I1202 22:26:09.304573  530747 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 22:26:09.304578  530747 kubeadm.go:319] 
	I1202 22:26:09.304682  530747 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 22:26:09.304714  530747 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 22:26:09.304745  530747 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 22:26:09.304749  530747 kubeadm.go:319] 
	I1202 22:26:09.313335  530747 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 22:26:09.313922  530747 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 22:26:09.314042  530747 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 22:26:09.314323  530747 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1202 22:26:09.314329  530747 kubeadm.go:319] 
	I1202 22:26:09.314401  530747 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
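kubeadm's summary above is the actual failure: the kubelet never passed its local health probe on 127.0.0.1:10248 within the 4m0.0s window, so no control-plane containers were ever created (consistent with the empty crictl listings that follow). Its suggested triage commands map directly onto the node; a sketch with <profile> standing in for this run's cluster name, since the interleaved log from pid 530747 does not name it in this stretch:

    minikube ssh -p <profile> "sudo systemctl status kubelet"
    minikube ssh -p <profile> "sudo journalctl -xeu kubelet | tail -n 100"
    minikube ssh -p <profile> "curl -sSL http://127.0.0.1:10248/healthz"

Given the SystemVerification warnings above (cgroups v1 support deprecated, with kubelet v1.35+ requiring FailCgroupV1=false to run on it) and the cgroup-v1 controllers reported by this 5.15 AWS kernel, a kubelet refusing to start on cgroups v1 is the most plausible root cause.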
	I1202 22:26:09.314453  530747 kubeadm.go:403] duration metric: took 8m6.243910977s to StartCluster
	I1202 22:26:09.314492  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:26:09.314550  530747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:26:09.344686  530747 cri.go:89] found id: ""
	I1202 22:26:09.344749  530747 logs.go:282] 0 containers: []
	W1202 22:26:09.344773  530747 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:26:09.344791  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:26:09.344878  530747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:26:09.374164  530747 cri.go:89] found id: ""
	I1202 22:26:09.374191  530747 logs.go:282] 0 containers: []
	W1202 22:26:09.374200  530747 logs.go:284] No container was found matching "etcd"
	I1202 22:26:09.374208  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:26:09.374272  530747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:26:09.397230  530747 cri.go:89] found id: ""
	I1202 22:26:09.397254  530747 logs.go:282] 0 containers: []
	W1202 22:26:09.397263  530747 logs.go:284] No container was found matching "coredns"
	I1202 22:26:09.397269  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:26:09.397328  530747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:26:09.421960  530747 cri.go:89] found id: ""
	I1202 22:26:09.421985  530747 logs.go:282] 0 containers: []
	W1202 22:26:09.421994  530747 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:26:09.422001  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:26:09.422060  530747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:26:09.446528  530747 cri.go:89] found id: ""
	I1202 22:26:09.446549  530747 logs.go:282] 0 containers: []
	W1202 22:26:09.446558  530747 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:26:09.446595  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:26:09.446678  530747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:26:09.471220  530747 cri.go:89] found id: ""
	I1202 22:26:09.471253  530747 logs.go:282] 0 containers: []
	W1202 22:26:09.471262  530747 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:26:09.471268  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:26:09.471341  530747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:26:09.495256  530747 cri.go:89] found id: ""
	I1202 22:26:09.495280  530747 logs.go:282] 0 containers: []
	W1202 22:26:09.495288  530747 logs.go:284] No container was found matching "kindnet"
	I1202 22:26:09.495298  530747 logs.go:123] Gathering logs for kubelet ...
	I1202 22:26:09.495309  530747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:26:09.554867  530747 logs.go:123] Gathering logs for dmesg ...
	I1202 22:26:09.554905  530747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:26:09.573421  530747 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:26:09.573449  530747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:26:09.657032  530747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:26:09.648318    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:09.649093    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:09.650698    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:09.651305    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:09.652859    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[... identical to the five "connection refused" lines above ...]
	
	** /stderr **
	I1202 22:26:09.657063  530747 logs.go:123] Gathering logs for containerd ...
	I1202 22:26:09.657076  530747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:26:09.698332  530747 logs.go:123] Gathering logs for container status ...
	I1202 22:26:09.698373  530747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 22:26:09.727969  530747 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000829165s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 22:26:09.728028  530747 out.go:285] * 
	W1202 22:26:09.728078  530747 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[... kubeadm init stdout/stderr omitted: identical to the "Error starting cluster" output above ...]
	W1202 22:26:09.728093  530747 out.go:285] * 
	W1202 22:26:09.730426  530747 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 22:26:09.735441  530747 out.go:203] 
	W1202 22:26:09.738403  530747 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[... kubeadm init stdout/stderr omitted: identical to the "Error starting cluster" output above ...]
	W1202 22:26:09.738449  530747 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 22:26:09.738475  530747 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 22:26:09.741577  530747 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 22:17:54 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:54.779220611Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:17:55 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:55.651148199Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 02 22:17:55 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:55.653829279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 02 22:17:55 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:55.668308994Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:17:55 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:55.669111160Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:17:56 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:56.622984489Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 02 22:17:56 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:56.625140122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 02 22:17:56 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:56.632796897Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:17:56 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:56.633245999Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:17:57 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:57.640322955Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 02 22:17:57 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:57.642498821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 02 22:17:57 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:57.650351217Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:17:57 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:57.651336556Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:17:58 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:58.673716263Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 02 22:17:58 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:58.675881036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 02 22:17:58 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:58.687140143Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:17:58 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:58.687773748Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:18:00 newest-cni-250247 containerd[756]: time="2025-12-02T22:18:00.213301739Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 02 22:18:00 newest-cni-250247 containerd[756]: time="2025-12-02T22:18:00.215802009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 02 22:18:00 newest-cni-250247 containerd[756]: time="2025-12-02T22:18:00.235906934Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:18:00 newest-cni-250247 containerd[756]: time="2025-12-02T22:18:00.236893889Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:18:00 newest-cni-250247 containerd[756]: time="2025-12-02T22:18:00.731026224Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 02 22:18:00 newest-cni-250247 containerd[756]: time="2025-12-02T22:18:00.733363816Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 02 22:18:00 newest-cni-250247 containerd[756]: time="2025-12-02T22:18:00.742879446Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:18:00 newest-cni-250247 containerd[756]: time="2025-12-02T22:18:00.743378803Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:26:10.855199    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:10.856617    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:10.857073    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:10.858560    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:10.859139    5553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 22:26:10 up  4:08,  0 user,  load average: 0.14, 0.69, 1.25
	Linux newest-cni-250247 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 22:26:07 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:26:08 newest-cni-250247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 02 22:26:08 newest-cni-250247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:26:08 newest-cni-250247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:26:08 newest-cni-250247 kubelet[5357]: E1202 22:26:08.086604    5357 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:26:08 newest-cni-250247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:26:08 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:26:08 newest-cni-250247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 02 22:26:08 newest-cni-250247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:26:08 newest-cni-250247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:26:08 newest-cni-250247 kubelet[5362]: E1202 22:26:08.845245    5362 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:26:08 newest-cni-250247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:26:08 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:26:09 newest-cni-250247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 02 22:26:09 newest-cni-250247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:26:09 newest-cni-250247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:26:09 newest-cni-250247 kubelet[5426]: E1202 22:26:09.611275    5426 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:26:09 newest-cni-250247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:26:09 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:26:10 newest-cni-250247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 02 22:26:10 newest-cni-250247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:26:10 newest-cni-250247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:26:10 newest-cni-250247 kubelet[5470]: E1202 22:26:10.364681    5470 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:26:10 newest-cni-250247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:26:10 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-250247 -n newest-cni-250247
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-250247 -n newest-cni-250247: exit status 6 (357.60844ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 22:26:11.403380  543871 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-250247" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "newest-cni-250247" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (507.06s)
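
Note on the failure mode above: every kubelet restart (restart counter 318-321) dies with "kubelet is configured to not run on a host using cgroup v1", and the kubeadm preflight warning names the escape hatch for kubelet v1.35+: set the kubelet configuration option 'FailCgroupV1' to 'false'. A minimal node-side sketch, assuming the /var/lib/kubelet/config.yaml path shown in the [kubelet-start] lines and the lowerCamelCase failCgroupV1 key (field casing assumed, not verified against this build):

	# Sketch only: re-enable cgroup v1 for kubelet v1.35+ (see the KEP-5573 link above).
	# If failCgroupV1 already appears in the file, edit it in place instead of appending.
	printf 'failCgroupV1: false\n' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet
	# Or retry the start with minikube's own suggestion from the log:
	#   out/minikube-linux-arm64 start -p newest-cni-250247 --extra-config=kubelet.cgroup-driver=systemd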

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (3.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-904303 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-904303 create -f testdata/busybox.yaml: exit status 1 (50.920557ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-904303" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-904303 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-904303
helpers_test.go:243: (dbg) docker inspect no-preload-904303:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436",
	        "Created": "2025-12-02T22:12:48.891111789Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 510696,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T22:12:48.960673074Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/hostname",
	        "HostsPath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/hosts",
	        "LogPath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436-json.log",
	        "Name": "/no-preload-904303",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-904303:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-904303",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436",
	                "LowerDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021/merged",
	                "UpperDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021/diff",
	                "WorkDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-904303",
	                "Source": "/var/lib/docker/volumes/no-preload-904303/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-904303",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-904303",
	                "name.minikube.sigs.k8s.io": "no-preload-904303",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2565d89b5b0cac53d37704e84ed068e1e8f9fea06698cfb7e3bf5fa82431969c",
	            "SandboxKey": "/var/run/docker/netns/2565d89b5b0c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33388"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33389"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33392"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33390"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33391"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-904303": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:ce:be:b1:c3:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd7fe0193300ea97495798d9ee6ddb57b917596827758698a61d4a79d61723bf",
	                    "EndpointID": "36cc446e2b4667656204614f2648dd0b57c6c026ff3e894f2ade69f763222166",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-904303",
	                        "419e3dce7c5d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-904303 -n no-preload-904303
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-904303 -n no-preload-904303: exit status 6 (363.369819ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 22:21:20.874619  536500 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-904303" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
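The status errors in this section ("kubeconfig endpoint: ... does not appear in .../kubeconfig") match the WARNING printed above: kubectl is pointing at a stale entry, and minikube's own hint is `minikube update-context`. A minimal sketch using this run's binary and profile name (taken from the surrounding logs; whether it recovers this particular failure is not shown here):

	# Sketch: rewrite the kubeconfig entry for the profile, then re-check status.
	out/minikube-linux-arm64 update-context -p no-preload-904303
	out/minikube-linux-arm64 status -p no-preload-904303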
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-904303 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-996157 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-996157       │ jenkins │ v1.37.0 │ 02 Dec 25 22:12 UTC │ 02 Dec 25 22:12 UTC │
	│ delete  │ -p old-k8s-version-996157                                                                                                                                                                                                                                  │ old-k8s-version-996157       │ jenkins │ v1.37.0 │ 02 Dec 25 22:12 UTC │ 02 Dec 25 22:12 UTC │
	│ delete  │ -p old-k8s-version-996157                                                                                                                                                                                                                                  │ old-k8s-version-996157       │ jenkins │ v1.37.0 │ 02 Dec 25 22:12 UTC │ 02 Dec 25 22:12 UTC │
	│ start   │ -p embed-certs-716386 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:12 UTC │ 02 Dec 25 22:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-716386 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:14 UTC │
	│ stop    │ -p embed-certs-716386 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:14 UTC │
	│ addons  │ enable dashboard -p embed-certs-716386 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:14 UTC │
	│ start   │ -p embed-certs-716386 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:15 UTC │
	│ image   │ embed-certs-716386 image list --format=json                                                                                                                                                                                                                │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ pause   │ -p embed-certs-716386 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ unpause │ -p embed-certs-716386 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p embed-certs-716386                                                                                                                                                                                                                                      │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p embed-certs-716386                                                                                                                                                                                                                                      │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p disable-driver-mounts-122586                                                                                                                                                                                                                            │ disable-driver-mounts-122586 │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ start   │ -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-444714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ stop    │ -p default-k8s-diff-port-444714 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-444714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ start   │ -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:17 UTC │
	│ image   │ default-k8s-diff-port-444714 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ pause   │ -p default-k8s-diff-port-444714 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ unpause │ -p default-k8s-diff-port-444714 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ delete  │ -p default-k8s-diff-port-444714                                                                                                                                                                                                                            │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ delete  │ -p default-k8s-diff-port-444714                                                                                                                                                                                                                            │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ start   │ -p newest-cni-250247 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 22:17:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 22:17:44.404531  530747 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:17:44.404670  530747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:17:44.404684  530747 out.go:374] Setting ErrFile to fd 2...
	I1202 22:17:44.404690  530747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:17:44.405094  530747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 22:17:44.405637  530747 out.go:368] Setting JSON to false
	I1202 22:17:44.406740  530747 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14403,"bootTime":1764699462,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 22:17:44.406830  530747 start.go:143] virtualization:  
	I1202 22:17:44.410982  530747 out.go:179] * [newest-cni-250247] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:17:44.415278  530747 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:17:44.415454  530747 notify.go:221] Checking for updates...
	I1202 22:17:44.421699  530747 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:17:44.424811  530747 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:17:44.427830  530747 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 22:17:44.430886  530747 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:17:44.433744  530747 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:17:44.437092  530747 config.go:182] Loaded profile config "no-preload-904303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:17:44.437182  530747 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:17:44.470020  530747 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:17:44.470192  530747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:17:44.532667  530747 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:17:44.522777992 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:17:44.532772  530747 docker.go:319] overlay module found
	I1202 22:17:44.536064  530747 out.go:179] * Using the docker driver based on user configuration
	I1202 22:17:44.538930  530747 start.go:309] selected driver: docker
	I1202 22:17:44.538949  530747 start.go:927] validating driver "docker" against <nil>
	I1202 22:17:44.538963  530747 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:17:44.539711  530747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:17:44.592441  530747 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:17:44.583215128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:17:44.592603  530747 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1202 22:17:44.592632  530747 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1202 22:17:44.592854  530747 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
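
The component map above is the expansion of the `--wait=apiserver,system_pods,default_sa` flag from the start command. A minimal sketch of that expansion (hypothetical; not minikube's actual flag parsing):

    // Minimal sketch (hypothetical, not minikube's actual flag parser): expand a
    // --wait=... value into the component map shown in the log line above.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func waitComponents(flag string) map[string]bool {
    	// All known components default to false; the ones listed in the flag flip to true.
    	m := map[string]bool{
    		"apiserver": false, "system_pods": false, "default_sa": false,
    		"apps_running": false, "kubelet": false, "node_ready": false, "extra": false,
    	}
    	for _, c := range strings.Split(flag, ",") {
    		c = strings.TrimSpace(c)
    		if _, ok := m[c]; ok {
    			m[c] = true
    		}
    	}
    	return m
    }

    func main() {
    	fmt.Println(waitComponents("apiserver,system_pods,default_sa"))
    }
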
	I1202 22:17:44.595765  530747 out.go:179] * Using Docker driver with root privileges
	I1202 22:17:44.598585  530747 cni.go:84] Creating CNI manager for ""
	I1202 22:17:44.598655  530747 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:17:44.598670  530747 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 22:17:44.598768  530747 start.go:353] cluster config:
	{Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:17:44.601883  530747 out.go:179] * Starting "newest-cni-250247" primary control-plane node in "newest-cni-250247" cluster
	I1202 22:17:44.604845  530747 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 22:17:44.607693  530747 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 22:17:44.610530  530747 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:17:44.610603  530747 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 22:17:44.634404  530747 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 22:17:44.634428  530747 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
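
The two lines above show the base-image lookup short-circuiting a pull when the pinned reference is already present in the local daemon. A minimal sketch of such a check (assumes shelling out to the docker CLI, which is not necessarily how image.go performs the lookup):

    // Minimal sketch (assumption: docker CLI lookup, not minikube's actual code):
    // report whether an image reference already exists in the local daemon, so
    // the pull can be skipped.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func inLocalDaemon(ref string) bool {
    	// `docker image inspect` exits non-zero when the reference is unknown.
    	return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
    	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974"
    	if inLocalDaemon(ref) {
    		fmt.Println("found in local docker daemon, skipping pull")
    	} else {
    		fmt.Println("not found, pulling")
    	}
    }
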
	W1202 22:17:44.673223  530747 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1202 22:17:44.860204  530747 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
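
The pair of 404 warnings above is the preload probe trying the GCS bucket first and the GitHub releases mirror second before falling back to per-image caching. A minimal sketch of that probe order (URLs copied from the log; helper name hypothetical):

    // Minimal sketch (helper name hypothetical): probe preload tarball mirrors in
    // order and report the first one that answers 200, mirroring the two 404s above.
    package main

    import (
    	"fmt"
    	"net/http"
    )

    func firstAvailable(urls []string) (string, bool) {
    	for _, u := range urls {
    		resp, err := http.Head(u)
    		if err != nil {
    			continue
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			return u, true
    		}
    	}
    	return "", false
    }

    func main() {
    	mirrors := []string{
    		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4",
    		"https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4",
    	}
    	if u, ok := firstAvailable(mirrors); ok {
    		fmt.Println("preload available at", u)
    	} else {
    		fmt.Println("no preload found; falling back to per-image cache")
    	}
    }
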
	I1202 22:17:44.860409  530747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json ...
	I1202 22:17:44.860446  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json: {Name:mk97b8ae8c3d085bfd853be8a3ae939898e326ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:17:44.860450  530747 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860569  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 22:17:44.860579  530747 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 143.118µs
	I1202 22:17:44.860592  530747 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 22:17:44.860605  530747 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860619  530747 cache.go:243] Successfully downloaded all kic artifacts
	I1202 22:17:44.860635  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 22:17:44.860641  530747 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 37.841µs
	I1202 22:17:44.860647  530747 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 22:17:44.860645  530747 start.go:360] acquireMachinesLock for newest-cni-250247: {Name:mk16586a4ea8dcb4ae29d3b0c6fe6a71644be6ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860657  530747 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860684  530747 start.go:364] duration metric: took 30.103µs to acquireMachinesLock for "newest-cni-250247"
	I1202 22:17:44.860691  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 22:17:44.860696  530747 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 40.401µs
	I1202 22:17:44.860705  530747 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 22:17:44.860715  530747 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860742  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 22:17:44.860702  530747 start.go:93] Provisioning new machine with config: &{Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 22:17:44.860748  530747 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 33.394µs
	I1202 22:17:44.860757  530747 start.go:125] createHost starting for "" (driver="docker")
	I1202 22:17:44.860763  530747 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 22:17:44.860772  530747 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860797  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 22:17:44.860802  530747 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 31.228µs
	I1202 22:17:44.860818  530747 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 22:17:44.860827  530747 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860853  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 22:17:44.860858  530747 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.933µs
	I1202 22:17:44.860886  530747 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860912  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 22:17:44.860917  530747 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 31.835µs
	I1202 22:17:44.860923  530747 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 22:17:44.860864  530747 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 22:17:44.860872  530747 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.861203  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 22:17:44.861213  530747 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 340.707µs
	I1202 22:17:44.861221  530747 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 22:17:44.861233  530747 cache.go:87] Successfully saved all images to host disk.
	I1202 22:17:44.866001  530747 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 22:17:44.866290  530747 start.go:159] libmachine.API.Create for "newest-cni-250247" (driver="docker")
	I1202 22:17:44.866351  530747 client.go:173] LocalClient.Create starting
	I1202 22:17:44.866422  530747 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem
	I1202 22:17:44.866458  530747 main.go:143] libmachine: Decoding PEM data...
	I1202 22:17:44.866484  530747 main.go:143] libmachine: Parsing certificate...
	I1202 22:17:44.866546  530747 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem
	I1202 22:17:44.866568  530747 main.go:143] libmachine: Decoding PEM data...
	I1202 22:17:44.866582  530747 main.go:143] libmachine: Parsing certificate...
	I1202 22:17:44.866956  530747 cli_runner.go:164] Run: docker network inspect newest-cni-250247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 22:17:44.882056  530747 cli_runner.go:211] docker network inspect newest-cni-250247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 22:17:44.882136  530747 network_create.go:284] running [docker network inspect newest-cni-250247] to gather additional debugging logs...
	I1202 22:17:44.882156  530747 cli_runner.go:164] Run: docker network inspect newest-cni-250247
	W1202 22:17:44.900735  530747 cli_runner.go:211] docker network inspect newest-cni-250247 returned with exit code 1
	I1202 22:17:44.900767  530747 network_create.go:287] error running [docker network inspect newest-cni-250247]: docker network inspect newest-cni-250247: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-250247 not found
	I1202 22:17:44.900798  530747 network_create.go:289] output of [docker network inspect newest-cni-250247]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-250247 not found
	
	** /stderr **
	I1202 22:17:44.900897  530747 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:17:44.919285  530747 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-37045a918311 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:0e:6a:1d:5f:aa} reservation:<nil>}
	I1202 22:17:44.919603  530747 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-11c615b6a402 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:e5:fa:65:65:bf} reservation:<nil>}
	I1202 22:17:44.919927  530747 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-efeb1d3ec8c6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:0d:78:3a:6e:22} reservation:<nil>}
	I1202 22:17:44.920175  530747 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bd7fe0193300 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:96:46:f1:c8:59:e0} reservation:<nil>}
	I1202 22:17:44.920559  530747 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ad4100}
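
The subnet scan above steps through minikube's private 192.168.x.0/24 candidates (49, 58, 67, 76, ... — an apparent stride of 9) and settles on the first one with no existing bridge. A minimal sketch of the same walk (taken set hardcoded from the log; the stride is inferred from the sequence shown):

    // Minimal sketch (taken subnets hardcoded from the log; the stride of 9 is
    // inferred from the 49/58/67/76/85 sequence above): pick the first free
    // 192.168.x.0/24 candidate.
    package main

    import "fmt"

    func main() {
    	taken := map[string]bool{
    		"192.168.49.0/24": true,
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true,
    		"192.168.76.0/24": true,
    	}
    	for third := 49; third <= 247; third += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
    		if !taken[cidr] {
    			fmt.Println("using free private subnet", cidr) // prints 192.168.85.0/24
    			return
    		}
    	}
    	fmt.Println("no free subnet found")
    }
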
	I1202 22:17:44.920582  530747 network_create.go:124] attempt to create docker network newest-cni-250247 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1202 22:17:44.920648  530747 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-250247 newest-cni-250247
	I1202 22:17:44.974998  530747 network_create.go:108] docker network newest-cni-250247 192.168.85.0/24 created
	I1202 22:17:44.975031  530747 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-250247" container
	I1202 22:17:44.975103  530747 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 22:17:44.990787  530747 cli_runner.go:164] Run: docker volume create newest-cni-250247 --label name.minikube.sigs.k8s.io=newest-cni-250247 --label created_by.minikube.sigs.k8s.io=true
	I1202 22:17:45.013271  530747 oci.go:103] Successfully created a docker volume newest-cni-250247
	I1202 22:17:45.013406  530747 cli_runner.go:164] Run: docker run --rm --name newest-cni-250247-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-250247 --entrypoint /usr/bin/test -v newest-cni-250247:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 22:17:45.621475  530747 oci.go:107] Successfully prepared a docker volume newest-cni-250247
	I1202 22:17:45.621530  530747 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1202 22:17:45.621683  530747 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1202 22:17:45.621835  530747 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 22:17:45.678934  530747 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-250247 --name newest-cni-250247 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-250247 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-250247 --network newest-cni-250247 --ip 192.168.85.2 --volume newest-cni-250247:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 22:17:45.981381  530747 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Running}}
	I1202 22:17:46.003380  530747 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:17:46.032446  530747 cli_runner.go:164] Run: docker exec newest-cni-250247 stat /var/lib/dpkg/alternatives/iptables
	I1202 22:17:46.085911  530747 oci.go:144] the created container "newest-cni-250247" has a running status.
	I1202 22:17:46.085938  530747 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa...
	I1202 22:17:46.535806  530747 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 22:17:46.556856  530747 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:17:46.574862  530747 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 22:17:46.574884  530747 kic_runner.go:114] Args: [docker exec --privileged newest-cni-250247 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 22:17:46.613075  530747 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:17:46.633142  530747 machine.go:94] provisionDockerMachine start ...
	I1202 22:17:46.633247  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:46.649612  530747 main.go:143] libmachine: Using SSH client type: native
	I1202 22:17:46.649974  530747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1202 22:17:46.649985  530747 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 22:17:46.650582  530747 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52268->127.0.0.1:33413: read: connection reset by peer
	I1202 22:17:49.801182  530747 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-250247
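
The reset-then-success pair above (22:17:46 vs 22:17:49) is the usual sign that sshd inside the fresh container was still starting; the dial is simply retried. A minimal retry sketch (backoff values hypothetical, not libmachine's actual policy):

    // Minimal sketch (retry policy hypothetical): keep dialing the forwarded SSH
    // port until the container's sshd accepts the TCP handshake, as the
    // reset-then-success pair above illustrates.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func waitForSSH(addr string, deadline time.Duration) error {
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // sshd not ready yet; retry
    	}
    	return fmt.Errorf("ssh on %s not reachable in %s", addr, deadline)
    }

    func main() {
    	if err := waitForSSH("127.0.0.1:33413", 30*time.Second); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("sshd is accepting connections")
    }
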
	
	I1202 22:17:49.801207  530747 ubuntu.go:182] provisioning hostname "newest-cni-250247"
	I1202 22:17:49.801275  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:49.819629  530747 main.go:143] libmachine: Using SSH client type: native
	I1202 22:17:49.819939  530747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1202 22:17:49.819956  530747 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-250247 && echo "newest-cni-250247" | sudo tee /etc/hostname
	I1202 22:17:49.975371  530747 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-250247
	
	I1202 22:17:49.975485  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:49.993473  530747 main.go:143] libmachine: Using SSH client type: native
	I1202 22:17:49.993863  530747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1202 22:17:49.993890  530747 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-250247' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-250247/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-250247' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 22:17:50.158635  530747 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 22:17:50.158664  530747 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 22:17:50.158693  530747 ubuntu.go:190] setting up certificates
	I1202 22:17:50.158702  530747 provision.go:84] configureAuth start
	I1202 22:17:50.158761  530747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:17:50.179744  530747 provision.go:143] copyHostCerts
	I1202 22:17:50.179822  530747 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 22:17:50.179837  530747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 22:17:50.179915  530747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 22:17:50.180033  530747 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 22:17:50.180044  530747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 22:17:50.180070  530747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 22:17:50.180125  530747 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 22:17:50.180135  530747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 22:17:50.180158  530747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 22:17:50.180217  530747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.newest-cni-250247 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-250247]
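
configureAuth above issues a server certificate whose SANs cover the loopback address, the static container IP, and the hostnames. A minimal sketch with crypto/x509 (self-signed for brevity; the real provisioner signs the server cert with the minikube CA key shown in the log):

    // Minimal sketch (self-signed for brevity; the real flow signs with the
    // minikube CA): issue a cert whose SANs match the san=[...] list above.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-250247"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    		DNSNames:     []string{"localhost", "minikube", "newest-cni-250247"},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("issued %d-byte DER server cert\n", len(der))
    }
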
	I1202 22:17:50.322891  530747 provision.go:177] copyRemoteCerts
	I1202 22:17:50.322959  530747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 22:17:50.323002  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.340979  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.445218  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 22:17:50.462694  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 22:17:50.479627  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 22:17:50.496822  530747 provision.go:87] duration metric: took 338.097427ms to configureAuth
	I1202 22:17:50.496849  530747 ubuntu.go:206] setting minikube options for container-runtime
	I1202 22:17:50.497071  530747 config.go:182] Loaded profile config "newest-cni-250247": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:17:50.497084  530747 machine.go:97] duration metric: took 3.863925712s to provisionDockerMachine
	I1202 22:17:50.497092  530747 client.go:176] duration metric: took 5.630731537s to LocalClient.Create
	I1202 22:17:50.497116  530747 start.go:167] duration metric: took 5.630827551s to libmachine.API.Create "newest-cni-250247"
	I1202 22:17:50.497128  530747 start.go:293] postStartSetup for "newest-cni-250247" (driver="docker")
	I1202 22:17:50.497139  530747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 22:17:50.497194  530747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 22:17:50.497239  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.514214  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.617428  530747 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 22:17:50.620641  530747 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 22:17:50.620667  530747 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 22:17:50.620679  530747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 22:17:50.620732  530747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 22:17:50.620820  530747 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 22:17:50.620924  530747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 22:17:50.628593  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:17:50.645552  530747 start.go:296] duration metric: took 148.408487ms for postStartSetup
	I1202 22:17:50.646009  530747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:17:50.663355  530747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json ...
	I1202 22:17:50.663625  530747 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:17:50.663682  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.680676  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.782247  530747 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 22:17:50.786667  530747 start.go:128] duration metric: took 5.925896109s to createHost
	I1202 22:17:50.786693  530747 start.go:83] releasing machines lock for "newest-cni-250247", held for 5.926000968s
	I1202 22:17:50.786793  530747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:17:50.811296  530747 ssh_runner.go:195] Run: cat /version.json
	I1202 22:17:50.811346  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.811581  530747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 22:17:50.811637  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.845065  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.848995  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.954487  530747 ssh_runner.go:195] Run: systemctl --version
	I1202 22:17:51.041838  530747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 22:17:51.046274  530747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 22:17:51.046341  530747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 22:17:51.074410  530747 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1202 22:17:51.074488  530747 start.go:496] detecting cgroup driver to use...
	I1202 22:17:51.074529  530747 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 22:17:51.074589  530747 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 22:17:51.089808  530747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 22:17:51.103374  530747 docker.go:218] disabling cri-docker service (if available) ...
	I1202 22:17:51.103448  530747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 22:17:51.121828  530747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 22:17:51.141789  530747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 22:17:51.261866  530747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 22:17:51.373962  530747 docker.go:234] disabling docker service ...
	I1202 22:17:51.374058  530747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 22:17:51.395721  530747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 22:17:51.408889  530747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 22:17:51.526784  530747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 22:17:51.666659  530747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 22:17:51.680421  530747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 22:17:51.694156  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 22:17:51.703246  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 22:17:51.711965  530747 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 22:17:51.712032  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 22:17:51.720534  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:17:51.729291  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 22:17:51.737871  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:17:51.746530  530747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 22:17:51.754381  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 22:17:51.763140  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 22:17:51.771794  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
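
The run of sed invocations above rewrites /etc/containerd/config.toml in place to match the cgroupfs driver detected on the host. A minimal Go equivalent of the SystemdCgroup rewrite (in-memory only; the real flow runs sed over SSH):

    // Minimal sketch (in-memory only; the real flow runs sed over SSH): the Go
    // equivalent of `sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }
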
	I1202 22:17:51.780775  530747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 22:17:51.788502  530747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 22:17:51.796157  530747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:17:51.902580  530747 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1202 22:17:51.980987  530747 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 22:17:51.981071  530747 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 22:17:51.985388  530747 start.go:564] Will wait 60s for crictl version
	I1202 22:17:51.985466  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:51.989692  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 22:17:52.016719  530747 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 22:17:52.016798  530747 ssh_runner.go:195] Run: containerd --version
	I1202 22:17:52.038219  530747 ssh_runner.go:195] Run: containerd --version
	I1202 22:17:52.064913  530747 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 22:17:52.067896  530747 cli_runner.go:164] Run: docker network inspect newest-cni-250247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:17:52.085835  530747 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 22:17:52.089730  530747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:17:52.103887  530747 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1202 22:17:52.106903  530747 kubeadm.go:884] updating cluster {Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 22:17:52.107043  530747 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:17:52.107129  530747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:17:52.134576  530747 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1202 22:17:52.134599  530747 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
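
With no preload tarball available, LoadCachedImages checks each required image against containerd and, as the following lines show, removes any name whose pinned sha is absent before re-importing it from the host cache. A minimal sketch of the presence check (runs the ctr command locally; the real flow execs it over SSH):

    // Minimal sketch (runs locally for brevity; the real flow execs this over
    // SSH): ask containerd's k8s.io namespace whether an image name is present.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func imagePresent(name string) (bool, error) {
    	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "ls", "name=="+name).Output()
    	if err != nil {
    		return false, err
    	}
    	// Header-only output means no matching image.
    	return len(strings.Split(strings.TrimSpace(string(out)), "\n")) > 1, nil
    }

    func main() {
    	ok, err := imagePresent("registry.k8s.io/kube-apiserver:v1.35.0-beta.0")
    	if err != nil {
    		fmt.Println("ctr not available:", err)
    		return
    	}
    	if !ok {
    		fmt.Println("image needs transfer from the host cache")
    	}
    }
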
	I1202 22:17:52.134654  530747 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:52.134875  530747 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.135006  530747 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.135100  530747 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.135208  530747 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.135311  530747 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1202 22:17:52.135412  530747 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.135504  530747 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.137073  530747 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:52.137501  530747 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.138001  530747 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1202 22:17:52.138176  530747 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.138332  530747 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.138460  530747 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.138578  530747 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.138691  530747 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.486863  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1202 22:17:52.486989  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.505095  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1202 22:17:52.505220  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.506565  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1202 22:17:52.506632  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.507855  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1202 22:17:52.507959  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1202 22:17:52.511007  530747 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1202 22:17:52.511097  530747 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.511158  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.532242  530747 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1202 22:17:52.532340  530747 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.532411  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.548827  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1202 22:17:52.548944  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.555163  530747 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1202 22:17:52.555232  530747 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.555287  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.555371  530747 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1202 22:17:52.555414  530747 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1202 22:17:52.555456  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.555558  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.555663  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.571785  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1202 22:17:52.571882  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.572308  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1202 22:17:52.572386  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.584466  530747 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1202 22:17:52.584539  530747 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.584606  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.623084  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.623169  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.623195  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:17:52.623234  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.623260  530747 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1202 22:17:52.623496  530747 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.623551  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.626466  530747 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1202 22:17:52.626537  530747 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.626592  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.626678  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.708388  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.708492  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.708522  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.708744  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:17:52.708784  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.708846  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.708845  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.808693  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.808697  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1202 22:17:52.808758  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.808797  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:17:52.808930  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.808998  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1202 22:17:52.809059  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:17:52.809112  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.819994  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:17:52.890157  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1202 22:17:52.890191  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1202 22:17:52.890263  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1202 22:17:52.890334  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:17:52.890389  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.890452  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1202 22:17:52.890493  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:17:52.890543  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.890584  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1202 22:17:52.890594  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1202 22:17:52.890633  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1202 22:17:52.890674  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1202 22:17:52.959924  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1202 22:17:52.959968  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1202 22:17:52.960046  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1202 22:17:52.960065  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1202 22:17:52.960114  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1202 22:17:52.960195  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:17:52.960258  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1202 22:17:52.960308  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:17:52.960375  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1202 22:17:52.960392  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1202 22:17:53.029868  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1202 22:17:53.030286  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1202 22:17:53.030052  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1202 22:17:53.030412  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1202 22:17:53.145803  530747 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1202 22:17:53.145876  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	W1202 22:17:53.398719  530747 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1202 22:17:53.398959  530747 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1202 22:17:53.399035  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:53.477986  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1202 22:17:53.503645  530747 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1202 22:17:53.503712  530747 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:53.503778  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:53.528168  530747 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:17:53.528256  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:17:53.576028  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:54.770595  530747 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.242308714s)
	I1202 22:17:54.770671  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1202 22:17:54.770607  530747 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.194545785s)
	I1202 22:17:54.770798  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:54.770711  530747 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:17:54.770887  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:17:55.659136  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1202 22:17:55.659166  530747 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:17:55.659215  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:17:55.659308  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:56.632721  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1202 22:17:56.632828  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:17:56.632885  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1202 22:17:56.632905  530747 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:17:56.632929  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:17:57.649982  530747 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.017026977s)
	I1202 22:17:57.650005  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1202 22:17:57.650033  530747 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:17:57.650080  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:17:57.650148  530747 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.017305696s)
	I1202 22:17:57.650163  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1202 22:17:57.650176  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1202 22:17:58.684406  530747 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.034301571s)
	I1202 22:17:58.684477  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1202 22:17:58.684519  530747 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:17:58.684597  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:18:00.238431  530747 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.553804859s)
	I1202 22:18:00.238459  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1202 22:18:00.238485  530747 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:18:00.238539  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:18:00.740937  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1202 22:18:00.741026  530747 cache_images.go:125] Successfully loaded all cached images
	I1202 22:18:00.741044  530747 cache_images.go:94] duration metric: took 8.60643049s to LoadCachedImages
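The lines above are minikube's cache-load loop for the control-plane images: each image is looked up in containerd's k8s.io namespace ("sudo ctr -n=k8s.io images ls name==..."), any stale tag is removed with crictl, the tarball under /var/lib/minikube/images is stat'ed, copied over from the local cache when missing, and finally imported with "ctr images import". A minimal Go sketch of that check-transfer-import pattern, with local stand-ins runNode and scpToNode in place of minikube's ssh_runner (the helper names are illustrative, not minikube's real API):

// Sketch of the cache-load loop logged above: ls, stat, scp, import.
// runNode and scpToNode are local stand-ins for minikube's ssh_runner.
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

func runNode(cmd string) (string, error) {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	return string(out), err
}

func scpToNode(local, remote string) error {
	_, err := runNode(fmt.Sprintf("cp %s %s", local, remote))
	return err
}

func loadCachedImage(image, cacheDir string) error {
	// 1. Already present in containerd's k8s.io namespace?
	if out, err := runNode("sudo ctr -n=k8s.io images ls name==" + image); err == nil && strings.Contains(out, image) {
		return nil
	}
	// 2. Tarball already on the node? (the stat -c "%s %y" probes above)
	tar := "/var/lib/minikube/images/" + strings.ReplaceAll(filepath.Base(image), ":", "_")
	if _, err := runNode(fmt.Sprintf("stat -c \"%%s %%y\" %s", tar)); err != nil {
		// 3. No: copy it from the local cache (the scp lines above).
		if err := scpToNode(filepath.Join(cacheDir, filepath.Base(image)), tar); err != nil {
			return err
		}
	}
	// 4. Import the tarball into the runtime.
	_, err := runNode("sudo ctr -n=k8s.io images import " + tar)
	return err
}

func main() {
	fmt.Println(loadCachedImage("registry.k8s.io/pause:3.10.1", "/tmp/cache"))
}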
	I1202 22:18:00.741063  530747 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1202 22:18:00.741200  530747 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-250247 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 22:18:00.741276  530747 ssh_runner.go:195] Run: sudo crictl info
	I1202 22:18:00.765139  530747 cni.go:84] Creating CNI manager for ""
	I1202 22:18:00.765169  530747 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:18:00.765188  530747 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1202 22:18:00.765212  530747 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-250247 NodeName:newest-cni-250247 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 22:18:00.765326  530747 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-250247"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
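The three documents above (InitConfiguration + ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered by minikube and written to /var/tmp/minikube/kubeadm.yaml.new further down, then copied into place before kubeadm runs. A trimmed sketch of rendering such a multi-document config with Go's text/template (the template text is illustrative, not minikube's actual template source):

// Render a trimmed two-document kubeadm config from a template.
package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "{{.PodCIDR}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	t.Execute(os.Stdout, map[string]string{
		"AdvertiseAddress": "192.168.85.2",
		"BindPort":         "8443",
		"PodCIDR":          "10.42.0.0/16",
	})
}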
	I1202 22:18:00.765433  530747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:18:00.773193  530747 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1202 22:18:00.773260  530747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:18:00.780940  530747 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1202 22:18:00.781025  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1202 22:18:00.781114  530747 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1202 22:18:00.781149  530747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:18:00.781235  530747 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1202 22:18:00.781291  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1202 22:18:00.788716  530747 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1202 22:18:00.788795  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1202 22:18:00.803910  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1202 22:18:00.804008  530747 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1202 22:18:00.804025  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1202 22:18:00.816846  530747 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1202 22:18:00.816880  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
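Each "Not caching binary" line above points at a dl.k8s.io URL with a checksum=file:...sha256 suffix: kubectl, kubelet, and kubeadm are fetched and verified against their published SHA-256 digests before being copied into /var/lib/minikube/binaries. The verification step on its own, as a small self-contained Go check (path and digest below are placeholders):

// Verify a downloaded binary against its .sha256 digest.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

func verify(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != strings.TrimSpace(wantHex) {
		return fmt.Errorf("checksum mismatch: got %s want %s", got, wantHex)
	}
	return nil
}

func main() {
	fmt.Println(verify("/tmp/kubectl", "deadbeef"))
}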
	I1202 22:18:01.602456  530747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 22:18:01.610653  530747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 22:18:01.624603  530747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 22:18:01.638373  530747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1202 22:18:01.652151  530747 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 22:18:01.656237  530747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
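The one-liner above pins control-plane.minikube.internal in /etc/hosts: grep -v strips any stale entry, the fresh tab-separated line is appended, and the result is staged in /tmp/h.$$ before sudo cp moves it into place. The same rewrite in Go, run here against a scratch file rather than the real /etc/hosts (pinHost is an illustrative name):

// Atomically pin "ip<TAB>host" in a hosts-style file.
package main

import (
	"os"
	"strings"
)

func pinHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any existing entry for this host (the grep -v above).
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // the log uses "sudo cp" instead of rename
}

func main() {
	pinHost("/tmp/hosts", "192.168.85.2", "control-plane.minikube.internal")
}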
	I1202 22:18:01.666754  530747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:18:01.784445  530747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:18:01.807464  530747 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247 for IP: 192.168.85.2
	I1202 22:18:01.807485  530747 certs.go:195] generating shared ca certs ...
	I1202 22:18:01.807504  530747 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:01.807689  530747 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 22:18:01.807752  530747 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 22:18:01.807763  530747 certs.go:257] generating profile certs ...
	I1202 22:18:01.807833  530747 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.key
	I1202 22:18:01.807852  530747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.crt with IP's: []
	I1202 22:18:01.904440  530747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.crt ...
	I1202 22:18:01.904514  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.crt: {Name:mkac1ba94fca76c17ef6889ccac434c85c3adfde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:01.904734  530747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.key ...
	I1202 22:18:01.904773  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.key: {Name:mk0c9426196191d76ac8bad3e60a1b42170fc3c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:01.904915  530747 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde
	I1202 22:18:01.904963  530747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt.dc944fde with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1202 22:18:02.273695  530747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt.dc944fde ...
	I1202 22:18:02.273733  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt.dc944fde: {Name:mk485afb3918fbbfcd9c10c46151672750ef52be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:02.273936  530747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde ...
	I1202 22:18:02.273952  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde: {Name:mk04b6e3543cdc0fbe6b60437820e2294d1297d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:02.274073  530747 certs.go:382] copying /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt.dc944fde -> /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt
	I1202 22:18:02.274162  530747 certs.go:386] copying /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde -> /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key
	I1202 22:18:02.274234  530747 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key
	I1202 22:18:02.274255  530747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt with IP's: []
	I1202 22:18:02.649970  530747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt ...
	I1202 22:18:02.650005  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt: {Name:mk19f12624bf230a68d68951d2c42662a58d37e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:02.650189  530747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key ...
	I1202 22:18:02.650212  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key: {Name:mk073d5c6ce4db6564bbfc911588b213e2c9f7d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:02.650417  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 22:18:02.650468  530747 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 22:18:02.650481  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 22:18:02.650512  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 22:18:02.650542  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 22:18:02.650565  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 22:18:02.650614  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:18:02.651161  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 22:18:02.670316  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 22:18:02.688640  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 22:18:02.706448  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 22:18:02.724635  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 22:18:02.741956  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 22:18:02.758769  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 22:18:02.776648  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 22:18:02.800104  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 22:18:02.824626  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 22:18:02.842845  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 22:18:02.859809  530747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
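The certs.go/crypto.go sequence above builds the profile's PKI: client, apiserver, and proxy-client certs are generated and signed by the shared minikube CAs, with the apiserver cert carrying the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2], and everything is then scp'd under /var/lib/minikube/certs. A compact crypto/x509 sketch of that signing step (the CA here is created inline purely for illustration):

// Sign an apiserver-style cert with IP SANs under a toy CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert with the IP SANs seen in the log above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tpl, caCert, &key.PublicKey, caKey)
	fmt.Println(len(der), err)
}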
	I1202 22:18:02.872445  530747 ssh_runner.go:195] Run: openssl version
	I1202 22:18:02.878854  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 22:18:02.887208  530747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 22:18:02.891035  530747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 22:18:02.891111  530747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 22:18:02.931750  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 22:18:02.940012  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 22:18:02.948063  530747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:18:02.951902  530747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:18:02.951976  530747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:18:02.992654  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 22:18:03.001115  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 22:18:03.011822  530747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 22:18:03.016313  530747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 22:18:03.016431  530747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 22:18:03.057849  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
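The three openssl/ln pairs above install each CA under OpenSSL's subject-hash lookup convention: "openssl x509 -hash -noout" prints the subject hash, and the cert is symlinked as /etc/ssl/certs/<hash>.0 (hence b5213941.0 for minikubeCA.pem), which is what lets the TLS stack find it by subject. The equivalent pair of steps in Go, shelling out to openssl (paths are placeholders):

// Link a CA cert under its OpenSSL subject-hash name.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(pem, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // mirror ln -fs (force replace)
	return os.Symlink(pem, link)
}

func main() {
	fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs"))
}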
	I1202 22:18:03.066486  530747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 22:18:03.070494  530747 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 22:18:03.070546  530747 kubeadm.go:401] StartCluster: {Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:18:03.070624  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 22:18:03.070686  530747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:18:03.097552  530747 cri.go:89] found id: ""
	I1202 22:18:03.097695  530747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 22:18:03.105804  530747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 22:18:03.114013  530747 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 22:18:03.114153  530747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 22:18:03.122166  530747 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 22:18:03.122190  530747 kubeadm.go:158] found existing configuration files:
	
	I1202 22:18:03.122266  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 22:18:03.130248  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 22:18:03.130314  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 22:18:03.137979  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 22:18:03.146142  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 22:18:03.146218  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 22:18:03.153915  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 22:18:03.162129  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 22:18:03.162264  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 22:18:03.170014  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 22:18:03.178190  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 22:18:03.178275  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 22:18:03.185714  530747 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
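The kubeadm init invocation above carries a long --ignore-preflight-errors list because many host preflights (Swap, Mem, NumCPU, SystemVerification, the static-manifest FileAvailable checks) either fail or are meaningless inside a docker-driver container. A sketch of assembling that command with a trimmed skip list (the exec.Cmd is only built and printed here, not run):

// Build (but don't run) a kubeadm init command with preflight skips.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	skips := []string{"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification"}
	cmd := exec.Command("sudo", "kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors="+strings.Join(skips, ","))
	fmt.Println(cmd.String())
}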
	I1202 22:18:03.223736  530747 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 22:18:03.223941  530747 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 22:18:03.308311  530747 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 22:18:03.308389  530747 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 22:18:03.308430  530747 kubeadm.go:319] OS: Linux
	I1202 22:18:03.308479  530747 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 22:18:03.308531  530747 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 22:18:03.308591  530747 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 22:18:03.308643  530747 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 22:18:03.308698  530747 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 22:18:03.308750  530747 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 22:18:03.308799  530747 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 22:18:03.308851  530747 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 22:18:03.308901  530747 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 22:18:03.374665  530747 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 22:18:03.374792  530747 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 22:18:03.374887  530747 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 22:18:03.388049  530747 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 22:18:03.397156  530747 out.go:252]   - Generating certificates and keys ...
	I1202 22:18:03.397267  530747 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 22:18:03.397355  530747 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 22:18:03.624812  530747 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 22:18:03.988647  530747 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 22:18:04.207719  530747 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 22:18:04.369148  530747 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 22:18:04.533091  530747 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 22:18:04.533470  530747 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-250247] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1202 22:18:04.781495  530747 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 22:18:04.781896  530747 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-250247] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1202 22:18:05.055068  530747 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 22:18:05.269007  530747 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 22:18:05.339371  530747 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 22:18:05.339621  530747 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 22:18:05.517146  530747 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 22:18:05.863539  530747 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 22:18:06.326882  530747 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 22:18:06.463358  530747 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 22:18:06.983101  530747 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 22:18:06.983766  530747 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 22:18:06.989546  530747 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 22:18:06.998787  530747 out.go:252]   - Booting up control plane ...
	I1202 22:18:06.998960  530747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 22:18:06.999088  530747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 22:18:06.999437  530747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 22:18:07.023294  530747 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 22:18:07.023746  530747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 22:18:07.031501  530747 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 22:18:07.032911  530747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 22:18:07.033179  530747 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 22:18:07.172448  530747 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 22:18:07.172569  530747 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 22:21:18.402255  510395 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00114361s
	I1202 22:21:18.402290  510395 kubeadm.go:319] 
	I1202 22:21:18.402400  510395 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 22:21:18.402462  510395 kubeadm.go:319] 	- The kubelet is not running
	I1202 22:21:18.403019  510395 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 22:21:18.403038  510395 kubeadm.go:319] 
	I1202 22:21:18.403228  510395 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 22:21:18.403293  510395 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 22:21:18.403358  510395 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 22:21:18.403371  510395 kubeadm.go:319] 
	I1202 22:21:18.408627  510395 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 22:21:18.409060  510395 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 22:21:18.409175  510395 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 22:21:18.409412  510395 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 22:21:18.409421  510395 kubeadm.go:319] 
	I1202 22:21:18.409510  510395 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
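The root failure above is kubeadm's kubelet-check timing out: for 4m0s it polls the kubelet's local health endpoint at http://127.0.0.1:10248/healthz and never gets an answer, so no static pods (and therefore no apiserver) ever come up. That probe is essentially a bounded HTTP poll, roughly:

// Poll a health endpoint until it answers 200 or the deadline passes.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthy("http://127.0.0.1:10248/healthz", 10*time.Second))
}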
	I1202 22:21:18.409568  510395 kubeadm.go:403] duration metric: took 8m6.498664339s to StartCluster
	I1202 22:21:18.409608  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:21:18.409703  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:21:18.433892  510395 cri.go:89] found id: ""
	I1202 22:21:18.433920  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.433929  510395 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:21:18.433935  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:21:18.433997  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:21:18.458135  510395 cri.go:89] found id: ""
	I1202 22:21:18.458168  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.458177  510395 logs.go:284] No container was found matching "etcd"
	I1202 22:21:18.458184  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:21:18.458251  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:21:18.487703  510395 cri.go:89] found id: ""
	I1202 22:21:18.487726  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.487735  510395 logs.go:284] No container was found matching "coredns"
	I1202 22:21:18.487742  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:21:18.487825  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:21:18.511734  510395 cri.go:89] found id: ""
	I1202 22:21:18.511757  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.511766  510395 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:21:18.511773  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:21:18.511833  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:21:18.535676  510395 cri.go:89] found id: ""
	I1202 22:21:18.535701  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.535710  510395 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:21:18.535717  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:21:18.535778  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:21:18.608686  510395 cri.go:89] found id: ""
	I1202 22:21:18.608714  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.608733  510395 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:21:18.608740  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:21:18.608810  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:21:18.636332  510395 cri.go:89] found id: ""
	I1202 22:21:18.636357  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.636366  510395 logs.go:284] No container was found matching "kindnet"
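After the timeout, the cri.go/logs.go lines above sweep for every expected control-plane container by name with "crictl ps -a --quiet --name=..."; each query returns an empty ID list, confirming the runtime never created any of them. The sweep amounts to:

// Enumerate expected control-plane containers via crictl name filters.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers\n", name, len(ids))
	}
}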
	I1202 22:21:18.636377  510395 logs.go:123] Gathering logs for container status ...
	I1202 22:21:18.636389  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:21:18.666396  510395 logs.go:123] Gathering logs for kubelet ...
	I1202 22:21:18.666423  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:21:18.724901  510395 logs.go:123] Gathering logs for dmesg ...
	I1202 22:21:18.724937  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:21:18.740835  510395 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:21:18.740863  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:21:18.806977  510395 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:21:18.799661    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.800223    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.801707    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.802161    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.803571    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:21:18.799661    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.800223    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.801707    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.802161    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.803571    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:21:18.806999  510395 logs.go:123] Gathering logs for containerd ...
	I1202 22:21:18.807011  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
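	The sequence above is minikube enumerating each expected control-plane component by name through crictl and finding no containers, which already tells the story: the kubelet never created the static pods. A minimal sketch of the same check, runnable inside the node; the component names and crictl flags are taken verbatim from the log above, and the profile name is this test's:

	    # Hedged sketch: repeat the per-component CRI queries from the log above.
	    # Run inside the minikube node, e.g. via `minikube ssh -p no-preload-904303`.
	    for name in etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	        echo "== ${name} =="
	        sudo crictl ps -a --quiet --name="${name}"   # empty output: no container was found
	    done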
	W1202 22:21:18.849559  510395 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00114361s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 22:21:18.849624  510395 out.go:285] * 
	W1202 22:21:18.849687  510395 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00114361s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 22:21:18.849738  510395 out.go:285] * 
	W1202 22:21:18.851859  510395 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 22:21:18.857885  510395 out.go:203] 
	W1202 22:21:18.861761  510395 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00114361s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 22:21:18.861806  510395 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 22:21:18.861829  510395 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 22:21:18.865566  510395 out.go:203] 
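	The exit path above ends with a concrete suggestion: retry with the kubelet cgroup driver forced to systemd. A hedged sketch of that retry; the --extra-config value is taken verbatim from the Suggestion line, while the remaining flags mirror this suite's start commands and may need adjusting:

	    # Hedged sketch: retry the failed start with the suggested kubelet override.
	    minikube start -p no-preload-904303 \
	      --driver=docker --container-runtime=containerd \
	      --kubernetes-version=v1.35.0-beta.0 \
	      --extra-config=kubelet.cgroup-driver=systemd
	    # Then inspect what the kubelet actually logged, per the Suggestion:
	    minikube ssh -p no-preload-904303 -- sudo journalctl -xeu kubelet | tail -n 50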
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 22:13:00 no-preload-904303 containerd[758]: time="2025-12-02T22:13:00.056487043Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:02 no-preload-904303 containerd[758]: time="2025-12-02T22:13:02.309327150Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 02 22:13:02 no-preload-904303 containerd[758]: time="2025-12-02T22:13:02.311580305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 02 22:13:02 no-preload-904303 containerd[758]: time="2025-12-02T22:13:02.319050815Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:02 no-preload-904303 containerd[758]: time="2025-12-02T22:13:02.320996139Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:03 no-preload-904303 containerd[758]: time="2025-12-02T22:13:03.675616176Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 02 22:13:03 no-preload-904303 containerd[758]: time="2025-12-02T22:13:03.678085817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 02 22:13:03 no-preload-904303 containerd[758]: time="2025-12-02T22:13:03.698352226Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:03 no-preload-904303 containerd[758]: time="2025-12-02T22:13:03.698661001Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:05 no-preload-904303 containerd[758]: time="2025-12-02T22:13:05.162074992Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 02 22:13:05 no-preload-904303 containerd[758]: time="2025-12-02T22:13:05.178922518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 02 22:13:05 no-preload-904303 containerd[758]: time="2025-12-02T22:13:05.243438144Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:05 no-preload-904303 containerd[758]: time="2025-12-02T22:13:05.244266609Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:07 no-preload-904303 containerd[758]: time="2025-12-02T22:13:07.858893547Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 02 22:13:07 no-preload-904303 containerd[758]: time="2025-12-02T22:13:07.861151683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 02 22:13:07 no-preload-904303 containerd[758]: time="2025-12-02T22:13:07.869754107Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:07 no-preload-904303 containerd[758]: time="2025-12-02T22:13:07.870787170Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.086495518Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.094498321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.103914819Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.106075735Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.551856390Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.553967518Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.561245033Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.561946477Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
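	The ImageCreate/ImageUpdate events above are the expected shape of a no-preload start: every control-plane image is pulled at runtime rather than loaded from a preload tarball. A hedged one-liner to confirm the pulls landed in containerd's CRI image store (run inside the node):

	    # Hedged sketch: list CRI-visible images matching the pulls logged above.
	    sudo crictl images | grep -E 'registry.k8s.io|storage-provisioner'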
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:21:21.490130    5673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:21.490912    5673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:21.492528    5673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:21.492938    5673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:21.494492    5673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
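	Every kubectl call above dies the same way, connection refused on port 8443, meaning nothing is serving the API. A hedged sketch for confirming that directly from inside the node before suspecting the kubeconfig:

	    # Hedged sketch: check whether anything listens on the apiserver port,
	    # matching the connection refusals above.
	    sudo ss -ltnp | grep -w 8443 || echo "nothing listening on 8443"
	    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"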
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 22:21:21 up  4:03,  0 user,  load average: 0.18, 0.98, 1.50
	Linux no-preload-904303 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 22:21:18 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:18 no-preload-904303 kubelet[5402]: E1202 22:21:18.625591    5402 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:21:18 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:21:18 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:21:19 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 02 22:21:19 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:19 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:19 no-preload-904303 kubelet[5456]: E1202 22:21:19.394195    5456 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:21:19 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:21:19 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:21:20 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 02 22:21:20 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:20 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:20 no-preload-904303 kubelet[5554]: E1202 22:21:20.154013    5554 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:21:20 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:21:20 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:21:20 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 02 22:21:20 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:20 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:20 no-preload-904303 kubelet[5587]: E1202 22:21:20.853868    5587 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:21:20 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:21:20 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:21:21 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 02 22:21:21 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:21 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-904303 -n no-preload-904303
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-904303 -n no-preload-904303: exit status 6 (323.869041ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 22:21:21.974131  536730 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-904303" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-904303" apiserver is not running, skipping kubectl commands (state="Stopped")
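	Both status probes fail for the same secondary reason: the profile has no endpoint entry in the kubeconfig, and the warning in stdout names the repair. A hedged sketch of that repair plus a sanity check:

	    # Hedged sketch: rewrite this profile's kubeconfig entry, as the
	    # "minikube update-context" warning above suggests, then verify.
	    minikube update-context -p no-preload-904303
	    kubectl config current-context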
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-904303
helpers_test.go:243: (dbg) docker inspect no-preload-904303:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436",
	        "Created": "2025-12-02T22:12:48.891111789Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 510696,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T22:12:48.960673074Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/hostname",
	        "HostsPath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/hosts",
	        "LogPath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436-json.log",
	        "Name": "/no-preload-904303",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-904303:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-904303",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436",
	                "LowerDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021/merged",
	                "UpperDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021/diff",
	                "WorkDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-904303",
	                "Source": "/var/lib/docker/volumes/no-preload-904303/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-904303",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-904303",
	                "name.minikube.sigs.k8s.io": "no-preload-904303",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2565d89b5b0cac53d37704e84ed068e1e8f9fea06698cfb7e3bf5fa82431969c",
	            "SandboxKey": "/var/run/docker/netns/2565d89b5b0c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33388"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33389"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33392"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33390"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33391"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-904303": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:ce:be:b1:c3:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd7fe0193300ea97495798d9ee6ddb57b917596827758698a61d4a79d61723bf",
	                    "EndpointID": "36cc446e2b4667656204614f2648dd0b57c6c026ff3e894f2ade69f763222166",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-904303",
	                        "419e3dce7c5d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
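The inspect dump confirms the node container itself is healthy: State.Status is running and 8443/tcp is published on 127.0.0.1:33391, so the failure is inside the node rather than in Docker. A hedged sketch for pulling just those two facts instead of the full document:

    # Hedged sketch: extract container state and published ports with Go
    # templates rather than reading the whole inspect output above.
    docker inspect -f '{{.State.Status}}' no-preload-904303
    docker inspect -f '{{json .NetworkSettings.Ports}}' no-preload-904303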
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-904303 -n no-preload-904303
E1202 22:21:22.156801  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/default-k8s-diff-port-444714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-904303 -n no-preload-904303: exit status 6 (320.585837ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 22:21:22.305313  536808 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-904303" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-904303 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-996157 --alsologtostderr -v=1                                                                                                                                                                                                           │ old-k8s-version-996157       │ jenkins │ v1.37.0 │ 02 Dec 25 22:12 UTC │ 02 Dec 25 22:12 UTC │
	│ delete  │ -p old-k8s-version-996157                                                                                                                                                                                                                                  │ old-k8s-version-996157       │ jenkins │ v1.37.0 │ 02 Dec 25 22:12 UTC │ 02 Dec 25 22:12 UTC │
	│ delete  │ -p old-k8s-version-996157                                                                                                                                                                                                                                  │ old-k8s-version-996157       │ jenkins │ v1.37.0 │ 02 Dec 25 22:12 UTC │ 02 Dec 25 22:12 UTC │
	│ start   │ -p embed-certs-716386 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:12 UTC │ 02 Dec 25 22:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-716386 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:14 UTC │
	│ stop    │ -p embed-certs-716386 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:14 UTC │
	│ addons  │ enable dashboard -p embed-certs-716386 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:14 UTC │
	│ start   │ -p embed-certs-716386 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:15 UTC │
	│ image   │ embed-certs-716386 image list --format=json                                                                                                                                                                                                                │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ pause   │ -p embed-certs-716386 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ unpause │ -p embed-certs-716386 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p embed-certs-716386                                                                                                                                                                                                                                      │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p embed-certs-716386                                                                                                                                                                                                                                      │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p disable-driver-mounts-122586                                                                                                                                                                                                                            │ disable-driver-mounts-122586 │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ start   │ -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-444714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ stop    │ -p default-k8s-diff-port-444714 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-444714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ start   │ -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:17 UTC │
	│ image   │ default-k8s-diff-port-444714 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ pause   │ -p default-k8s-diff-port-444714 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ unpause │ -p default-k8s-diff-port-444714 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ delete  │ -p default-k8s-diff-port-444714                                                                                                                                                                                                                            │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ delete  │ -p default-k8s-diff-port-444714                                                                                                                                                                                                                            │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ start   │ -p newest-cni-250247 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 22:17:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 22:17:44.404531  530747 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:17:44.404670  530747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:17:44.404684  530747 out.go:374] Setting ErrFile to fd 2...
	I1202 22:17:44.404690  530747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:17:44.405094  530747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 22:17:44.405637  530747 out.go:368] Setting JSON to false
	I1202 22:17:44.406740  530747 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14403,"bootTime":1764699462,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 22:17:44.406830  530747 start.go:143] virtualization:  
	I1202 22:17:44.410982  530747 out.go:179] * [newest-cni-250247] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:17:44.415278  530747 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:17:44.415454  530747 notify.go:221] Checking for updates...
	I1202 22:17:44.421699  530747 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:17:44.424811  530747 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:17:44.427830  530747 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 22:17:44.430886  530747 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:17:44.433744  530747 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:17:44.437092  530747 config.go:182] Loaded profile config "no-preload-904303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:17:44.437182  530747 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:17:44.470020  530747 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:17:44.470192  530747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:17:44.532667  530747 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:17:44.522777992 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:17:44.532772  530747 docker.go:319] overlay module found
	I1202 22:17:44.536064  530747 out.go:179] * Using the docker driver based on user configuration
	I1202 22:17:44.538930  530747 start.go:309] selected driver: docker
	I1202 22:17:44.538949  530747 start.go:927] validating driver "docker" against <nil>
	I1202 22:17:44.538963  530747 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:17:44.539711  530747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:17:44.592441  530747 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:17:44.583215128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:17:44.592603  530747 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1202 22:17:44.592632  530747 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1202 22:17:44.592854  530747 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 22:17:44.595765  530747 out.go:179] * Using Docker driver with root privileges
	I1202 22:17:44.598585  530747 cni.go:84] Creating CNI manager for ""
	I1202 22:17:44.598655  530747 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:17:44.598670  530747 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 22:17:44.598768  530747 start.go:353] cluster config:
	{Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
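	The "recommending kindnet" decision a few lines up is a simple rule: with --network-plugin=cni and no explicit --cni, the docker driver plus the containerd runtime defaults to kindnet. A minimal Go sketch of that selection, with illustrative names only (the real logic lives in minikube's cni package):

	    // Hypothetical reduction of the CNI auto-selection seen in this log.
	    package main

	    import "fmt"

	    // chooseCNI returns the CNI to use: an explicit request wins; otherwise
	    // docker + containerd defaults to kindnet, per the cni.go lines above.
	    func chooseCNI(driver, runtime, requested string) string {
	        if requested != "" {
	            return requested
	        }
	        if driver == "docker" && runtime == "containerd" {
	            return "kindnet"
	        }
	        return "bridge"
	    }

	    func main() {
	        fmt.Println(chooseCNI("docker", "containerd", "")) // kindnet
	    }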
	I1202 22:17:44.601883  530747 out.go:179] * Starting "newest-cni-250247" primary control-plane node in "newest-cni-250247" cluster
	I1202 22:17:44.604845  530747 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 22:17:44.607693  530747 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 22:17:44.610530  530747 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:17:44.610603  530747 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 22:17:44.634404  530747 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 22:17:44.634428  530747 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 22:17:44.673223  530747 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1202 22:17:44.860204  530747 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
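	Both preload URLs return 404 because no v18 preload tarball is published for v1.35.0-beta.0 on arm64, so the run falls back to the per-image cache seen in the cache.go lines below. A minimal sketch of that probe-then-fallback, using the URLs from this log (the flow is illustrative, not minikube's exact code):

	    // Probe each preload mirror; on miss, cache images one by one.
	    package main

	    import (
	        "fmt"
	        "net/http"
	    )

	    func preloadExists(urls []string) bool {
	        for _, u := range urls {
	            resp, err := http.Head(u)
	            if err != nil {
	                continue
	            }
	            resp.Body.Close()
	            if resp.StatusCode == http.StatusOK {
	                return true
	            }
	        }
	        return false
	    }

	    func main() {
	        urls := []string{
	            "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4",
	            "https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4",
	        }
	        if !preloadExists(urls) {
	            fmt.Println("no preload; caching images individually")
	        }
	    }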
	I1202 22:17:44.860409  530747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json ...
	I1202 22:17:44.860446  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json: {Name:mk97b8ae8c3d085bfd853be8a3ae939898e326ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:17:44.860450  530747 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860569  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 22:17:44.860579  530747 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 143.118µs
	I1202 22:17:44.860592  530747 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 22:17:44.860605  530747 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860619  530747 cache.go:243] Successfully downloaded all kic artifacts
	I1202 22:17:44.860635  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 22:17:44.860641  530747 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 37.841µs
	I1202 22:17:44.860647  530747 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 22:17:44.860645  530747 start.go:360] acquireMachinesLock for newest-cni-250247: {Name:mk16586a4ea8dcb4ae29d3b0c6fe6a71644be6ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860657  530747 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860684  530747 start.go:364] duration metric: took 30.103µs to acquireMachinesLock for "newest-cni-250247"
	I1202 22:17:44.860691  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 22:17:44.860696  530747 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 40.401µs
	I1202 22:17:44.860705  530747 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 22:17:44.860715  530747 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860742  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 22:17:44.860702  530747 start.go:93] Provisioning new machine with config: &{Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 22:17:44.860748  530747 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 33.394µs
	I1202 22:17:44.860757  530747 start.go:125] createHost starting for "" (driver="docker")
	I1202 22:17:44.860763  530747 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 22:17:44.860772  530747 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860797  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 22:17:44.860802  530747 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 31.228µs
	I1202 22:17:44.860818  530747 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 22:17:44.860827  530747 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860853  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 22:17:44.860858  530747 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.933µs
	I1202 22:17:44.860886  530747 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860912  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 22:17:44.860917  530747 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 31.835µs
	I1202 22:17:44.860923  530747 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 22:17:44.860864  530747 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 22:17:44.860872  530747 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.861203  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 22:17:44.861213  530747 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 340.707µs
	I1202 22:17:44.861221  530747 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 22:17:44.861233  530747 cache.go:87] Successfully saved all images to host disk.
	I1202 22:17:44.866001  530747 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 22:17:44.866290  530747 start.go:159] libmachine.API.Create for "newest-cni-250247" (driver="docker")
	I1202 22:17:44.866351  530747 client.go:173] LocalClient.Create starting
	I1202 22:17:44.866422  530747 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem
	I1202 22:17:44.866458  530747 main.go:143] libmachine: Decoding PEM data...
	I1202 22:17:44.866484  530747 main.go:143] libmachine: Parsing certificate...
	I1202 22:17:44.866546  530747 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem
	I1202 22:17:44.866568  530747 main.go:143] libmachine: Decoding PEM data...
	I1202 22:17:44.866582  530747 main.go:143] libmachine: Parsing certificate...
	I1202 22:17:44.866956  530747 cli_runner.go:164] Run: docker network inspect newest-cni-250247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 22:17:44.882056  530747 cli_runner.go:211] docker network inspect newest-cni-250247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 22:17:44.882136  530747 network_create.go:284] running [docker network inspect newest-cni-250247] to gather additional debugging logs...
	I1202 22:17:44.882156  530747 cli_runner.go:164] Run: docker network inspect newest-cni-250247
	W1202 22:17:44.900735  530747 cli_runner.go:211] docker network inspect newest-cni-250247 returned with exit code 1
	I1202 22:17:44.900767  530747 network_create.go:287] error running [docker network inspect newest-cni-250247]: docker network inspect newest-cni-250247: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-250247 not found
	I1202 22:17:44.900798  530747 network_create.go:289] output of [docker network inspect newest-cni-250247]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-250247 not found
	
	** /stderr **
	I1202 22:17:44.900897  530747 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:17:44.919285  530747 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-37045a918311 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:0e:6a:1d:5f:aa} reservation:<nil>}
	I1202 22:17:44.919603  530747 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-11c615b6a402 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:e5:fa:65:65:bf} reservation:<nil>}
	I1202 22:17:44.919927  530747 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-efeb1d3ec8c6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:0d:78:3a:6e:22} reservation:<nil>}
	I1202 22:17:44.920175  530747 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bd7fe0193300 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:96:46:f1:c8:59:e0} reservation:<nil>}
	I1202 22:17:44.920559  530747 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ad4100}
	I1202 22:17:44.920582  530747 network_create.go:124] attempt to create docker network newest-cni-250247 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1202 22:17:44.920648  530747 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-250247 newest-cni-250247
	I1202 22:17:44.974998  530747 network_create.go:108] docker network newest-cni-250247 192.168.85.0/24 created
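	The network.go lines above walk candidate private /24 subnets starting at 192.168.49.0/24 in steps of 9 (49, 58, 67, 76, ...) and take the first one without an existing bridge interface. A minimal sketch of that walk, assuming the start and step seen in this log:

	    // Find the first free 192.168.x.0/24 candidate, stepping x by 9.
	    package main

	    import "fmt"

	    // taken stands in for minikube's check against existing host interfaces.
	    func firstFreeSubnet(taken map[int]bool) string {
	        for third := 49; third <= 254; third += 9 {
	            if !taken[third] {
	                return fmt.Sprintf("192.168.%d.0/24", third)
	            }
	        }
	        return ""
	    }

	    func main() {
	        taken := map[int]bool{49: true, 58: true, 67: true, 76: true}
	        fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24
	    }

	Within the chosen block, the gateway gets .1 and the first container the static .2, which is where the calculated 192.168.85.2 in the next line comes from.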
	I1202 22:17:44.975031  530747 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-250247" container
	I1202 22:17:44.975103  530747 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 22:17:44.990787  530747 cli_runner.go:164] Run: docker volume create newest-cni-250247 --label name.minikube.sigs.k8s.io=newest-cni-250247 --label created_by.minikube.sigs.k8s.io=true
	I1202 22:17:45.013271  530747 oci.go:103] Successfully created a docker volume newest-cni-250247
	I1202 22:17:45.013406  530747 cli_runner.go:164] Run: docker run --rm --name newest-cni-250247-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-250247 --entrypoint /usr/bin/test -v newest-cni-250247:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 22:17:45.621475  530747 oci.go:107] Successfully prepared a docker volume newest-cni-250247
	I1202 22:17:45.621530  530747 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1202 22:17:45.621683  530747 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1202 22:17:45.621835  530747 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 22:17:45.678934  530747 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-250247 --name newest-cni-250247 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-250247 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-250247 --network newest-cni-250247 --ip 192.168.85.2 --volume newest-cni-250247:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 22:17:45.981381  530747 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Running}}
	I1202 22:17:46.003380  530747 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:17:46.032446  530747 cli_runner.go:164] Run: docker exec newest-cni-250247 stat /var/lib/dpkg/alternatives/iptables
	I1202 22:17:46.085911  530747 oci.go:144] the created container "newest-cni-250247" has a running status.
	I1202 22:17:46.085938  530747 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa...
	I1202 22:17:46.535806  530747 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 22:17:46.556856  530747 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:17:46.574862  530747 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 22:17:46.574884  530747 kic_runner.go:114] Args: [docker exec --privileged newest-cni-250247 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 22:17:46.613075  530747 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:17:46.633142  530747 machine.go:94] provisionDockerMachine start ...
	I1202 22:17:46.633247  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:46.649612  530747 main.go:143] libmachine: Using SSH client type: native
	I1202 22:17:46.649974  530747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1202 22:17:46.649985  530747 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 22:17:46.650582  530747 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52268->127.0.0.1:33413: read: connection reset by peer
	I1202 22:17:49.801182  530747 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-250247
	
	I1202 22:17:49.801207  530747 ubuntu.go:182] provisioning hostname "newest-cni-250247"
	I1202 22:17:49.801275  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:49.819629  530747 main.go:143] libmachine: Using SSH client type: native
	I1202 22:17:49.819939  530747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1202 22:17:49.819956  530747 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-250247 && echo "newest-cni-250247" | sudo tee /etc/hostname
	I1202 22:17:49.975371  530747 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-250247
	
	I1202 22:17:49.975485  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:49.993473  530747 main.go:143] libmachine: Using SSH client type: native
	I1202 22:17:49.993863  530747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1202 22:17:49.993890  530747 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-250247' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-250247/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-250247' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 22:17:50.158635  530747 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 22:17:50.158664  530747 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 22:17:50.158693  530747 ubuntu.go:190] setting up certificates
	I1202 22:17:50.158702  530747 provision.go:84] configureAuth start
	I1202 22:17:50.158761  530747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:17:50.179744  530747 provision.go:143] copyHostCerts
	I1202 22:17:50.179822  530747 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 22:17:50.179837  530747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 22:17:50.179915  530747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 22:17:50.180033  530747 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 22:17:50.180044  530747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 22:17:50.180070  530747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 22:17:50.180125  530747 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 22:17:50.180135  530747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 22:17:50.180158  530747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 22:17:50.180217  530747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.newest-cni-250247 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-250247]
	I1202 22:17:50.322891  530747 provision.go:177] copyRemoteCerts
	I1202 22:17:50.322959  530747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 22:17:50.323002  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.340979  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.445218  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 22:17:50.462694  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 22:17:50.479627  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 22:17:50.496822  530747 provision.go:87] duration metric: took 338.097427ms to configureAuth
	I1202 22:17:50.496849  530747 ubuntu.go:206] setting minikube options for container-runtime
	I1202 22:17:50.497071  530747 config.go:182] Loaded profile config "newest-cni-250247": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:17:50.497084  530747 machine.go:97] duration metric: took 3.863925712s to provisionDockerMachine
	I1202 22:17:50.497092  530747 client.go:176] duration metric: took 5.630731537s to LocalClient.Create
	I1202 22:17:50.497116  530747 start.go:167] duration metric: took 5.630827551s to libmachine.API.Create "newest-cni-250247"
	I1202 22:17:50.497128  530747 start.go:293] postStartSetup for "newest-cni-250247" (driver="docker")
	I1202 22:17:50.497139  530747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 22:17:50.497194  530747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 22:17:50.497239  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.514214  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.617428  530747 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 22:17:50.620641  530747 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 22:17:50.620667  530747 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 22:17:50.620679  530747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 22:17:50.620732  530747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 22:17:50.620820  530747 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 22:17:50.620924  530747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 22:17:50.628593  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:17:50.645552  530747 start.go:296] duration metric: took 148.408487ms for postStartSetup
	I1202 22:17:50.646009  530747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:17:50.663355  530747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json ...
	I1202 22:17:50.663625  530747 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:17:50.663682  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.680676  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.782247  530747 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 22:17:50.786667  530747 start.go:128] duration metric: took 5.925896109s to createHost
	I1202 22:17:50.786693  530747 start.go:83] releasing machines lock for "newest-cni-250247", held for 5.926000968s
	I1202 22:17:50.786793  530747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:17:50.811296  530747 ssh_runner.go:195] Run: cat /version.json
	I1202 22:17:50.811346  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.811581  530747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 22:17:50.811637  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.845065  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.848995  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.954487  530747 ssh_runner.go:195] Run: systemctl --version
	I1202 22:17:51.041838  530747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 22:17:51.046274  530747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 22:17:51.046341  530747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 22:17:51.074410  530747 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1202 22:17:51.074488  530747 start.go:496] detecting cgroup driver to use...
	I1202 22:17:51.074529  530747 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 22:17:51.074589  530747 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 22:17:51.089808  530747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 22:17:51.103374  530747 docker.go:218] disabling cri-docker service (if available) ...
	I1202 22:17:51.103448  530747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 22:17:51.121828  530747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 22:17:51.141789  530747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 22:17:51.261866  530747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 22:17:51.373962  530747 docker.go:234] disabling docker service ...
	I1202 22:17:51.374058  530747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 22:17:51.395721  530747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 22:17:51.408889  530747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 22:17:51.526784  530747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 22:17:51.666659  530747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 22:17:51.680421  530747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 22:17:51.694156  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 22:17:51.703246  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 22:17:51.711965  530747 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 22:17:51.712032  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 22:17:51.720534  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:17:51.729291  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 22:17:51.737871  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:17:51.746530  530747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 22:17:51.754381  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 22:17:51.763140  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 22:17:51.771794  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1202 22:17:51.780775  530747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 22:17:51.788502  530747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 22:17:51.796157  530747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:17:51.902580  530747 ssh_runner.go:195] Run: sudo systemctl restart containerd
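	The sed run just above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver detected on the host (SystemdCgroup = false) before the daemon-reload and restart. A sketch of that one substitution in Go, illustrative only (minikube shells out to sed rather than doing this in-process):

	    // Reproduce the SystemdCgroup rewrite from the sed command above.
	    package main

	    import (
	        "fmt"
	        "regexp"
	    )

	    func main() {
	        conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	            SystemdCgroup = true`
	        // Matches the sed pattern 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'.
	        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	        fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
	    }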
	I1202 22:17:51.980987  530747 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 22:17:51.981071  530747 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 22:17:51.985388  530747 start.go:564] Will wait 60s for crictl version
	I1202 22:17:51.985466  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:51.989692  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 22:17:52.016719  530747 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 22:17:52.016798  530747 ssh_runner.go:195] Run: containerd --version
	I1202 22:17:52.038219  530747 ssh_runner.go:195] Run: containerd --version
	I1202 22:17:52.064913  530747 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 22:17:52.067896  530747 cli_runner.go:164] Run: docker network inspect newest-cni-250247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:17:52.085835  530747 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 22:17:52.089730  530747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:17:52.103887  530747 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1202 22:17:52.106903  530747 kubeadm.go:884] updating cluster {Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 22:17:52.107043  530747 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:17:52.107129  530747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:17:52.134576  530747 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1202 22:17:52.134599  530747 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1202 22:17:52.134654  530747 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:52.134875  530747 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.135006  530747 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.135100  530747 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.135208  530747 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.135311  530747 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1202 22:17:52.135412  530747 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.135504  530747 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.137073  530747 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:52.137501  530747 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.138001  530747 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1202 22:17:52.138176  530747 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.138332  530747 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.138460  530747 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.138578  530747 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.138691  530747 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.486863  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1202 22:17:52.486989  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.505095  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1202 22:17:52.505220  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.506565  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1202 22:17:52.506632  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.507855  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1202 22:17:52.507959  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1202 22:17:52.511007  530747 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1202 22:17:52.511097  530747 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.511158  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.532242  530747 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1202 22:17:52.532340  530747 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.532411  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.548827  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1202 22:17:52.548944  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.555163  530747 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1202 22:17:52.555232  530747 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.555287  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.555371  530747 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1202 22:17:52.555414  530747 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1202 22:17:52.555456  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.555558  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.555663  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.571785  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1202 22:17:52.571882  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.572308  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1202 22:17:52.572386  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.584466  530747 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1202 22:17:52.584539  530747 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.584606  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.623084  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.623169  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.623195  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:17:52.623234  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.623260  530747 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1202 22:17:52.623496  530747 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.623551  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.626466  530747 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1202 22:17:52.626537  530747 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.626592  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.626678  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.708388  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.708492  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.708522  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.708744  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:17:52.708784  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.708846  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.708845  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
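The interleaved timestamps (several crictl rmi runs issued within the same millisecond, some out of order) show these removals are dispatched concurrently, one worker per image. A sketch of that fan-out using errgroup; golang.org/x/sync is my assumption for illustration, not a claim about how minikube structures this internally:

    package main

    import (
        "context"
        "fmt"
        "os/exec"

        "golang.org/x/sync/errgroup"
    )

    // removeImages issues one `sudo <crictl> rmi <image>` per goroutine,
    // matching the burst of parallel rmi runs in the log above.
    func removeImages(ctx context.Context, crictl string, images []string) error {
        g, ctx := errgroup.WithContext(ctx)
        for _, img := range images {
            img := img // capture the loop variable for the goroutine
            g.Go(func() error {
                return exec.CommandContext(ctx, "sudo", crictl, "rmi", img).Run()
            })
        }
        return g.Wait()
    }

    func main() {
        imgs := []string{"registry.k8s.io/pause:3.10.1", "registry.k8s.io/etcd:3.6.5-0"}
        fmt.Println(removeImages(context.Background(), "/usr/local/bin/crictl", imgs))
    }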
	I1202 22:17:52.808693  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.808697  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1202 22:17:52.808758  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.808797  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:17:52.808930  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.808998  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1202 22:17:52.809059  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:17:52.809112  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.819994  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:17:52.890157  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1202 22:17:52.890191  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
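Every cached image goes through the same gate: stat the target under /var/lib/minikube/images, and only scp the tarball across when the stat exits with status 1. A condensed sketch of that check-then-copy step; runCmd is a local stand-in for minikube's ssh_runner, and the paths are shortened:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runCmd stands in for minikube's ssh_runner; here it just runs the
    // command locally so the sketch is self-contained.
    func runCmd(name string, args ...string) error {
        return exec.Command(name, args...).Run()
    }

    // ensureImageFile copies src to dst only if dst is missing, mirroring
    // the stat / "Process exited with status 1" / scp sequence in the log.
    func ensureImageFile(src, dst string) error {
        if runCmd("stat", "-c", "%s %y", dst) == nil {
            return nil // already transferred on a previous run
        }
        fmt.Printf("existence check failed, copying %s --> %s\n", src, dst)
        return runCmd("cp", src, dst) // the real code scps over ssh instead
    }

    func main() {
        fmt.Println(ensureImageFile("cache/pause_3.10.1",
            "/var/lib/minikube/images/pause_3.10.1"))
    }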
	I1202 22:17:52.890263  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1202 22:17:52.890334  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:17:52.890389  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.890452  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1202 22:17:52.890493  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:17:52.890543  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.890584  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1202 22:17:52.890594  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1202 22:17:52.890633  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1202 22:17:52.890674  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1202 22:17:52.959924  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1202 22:17:52.959968  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1202 22:17:52.960046  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1202 22:17:52.960065  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1202 22:17:52.960114  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1202 22:17:52.960195  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:17:52.960258  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1202 22:17:52.960308  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:17:52.960375  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1202 22:17:52.960392  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1202 22:17:53.029868  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1202 22:17:53.030286  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1202 22:17:53.030052  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1202 22:17:53.030412  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1202 22:17:53.145803  530747 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1202 22:17:53.145876  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
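Once a tarball sits in /var/lib/minikube/images it is loaded into containerd with `ctr images import`, always in the k8s.io namespace so the kubelet's CRI calls can see it. Note the loads themselves run one at a time (a single "Loading image:" is in flight at any moment), which this sketch preserves:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // importImages loads image tarballs into containerd's k8s.io namespace
    // sequentially, matching the one-at-a-time "Loading image:" lines above.
    func importImages(tarballs []string) error {
        for _, t := range tarballs {
            fmt.Println("Loading image:", t)
            out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images",
                "import", t).CombinedOutput()
            if err != nil {
                return fmt.Errorf("import %s: %v: %s", t, err, out)
            }
        }
        return nil
    }

    func main() {
        fmt.Println(importImages([]string{"/var/lib/minikube/images/pause_3.10.1"}))
    }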
	W1202 22:17:53.398719  530747 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1202 22:17:53.398959  530747 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1202 22:17:53.399035  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:53.477986  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1202 22:17:53.503645  530747 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1202 22:17:53.503712  530747 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:53.503778  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:53.528168  530747 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:17:53.528256  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:17:53.576028  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:54.770595  530747 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.242308714s)
	I1202 22:17:54.770671  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1202 22:17:54.770607  530747 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.194545785s)
	I1202 22:17:54.770798  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:54.770711  530747 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:17:54.770887  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:17:55.659136  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1202 22:17:55.659166  530747 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:17:55.659215  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:17:55.659308  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:56.632721  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1202 22:17:56.632828  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:17:56.632885  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1202 22:17:56.632905  530747 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:17:56.632929  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:17:57.649982  530747 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.017026977s)
	I1202 22:17:57.650005  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1202 22:17:57.650033  530747 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:17:57.650080  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:17:57.650148  530747 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.017305696s)
	I1202 22:17:57.650163  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1202 22:17:57.650176  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1202 22:17:58.684406  530747 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.034301571s)
	I1202 22:17:58.684477  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1202 22:17:58.684519  530747 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:17:58.684597  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:18:00.238431  530747 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.553804859s)
	I1202 22:18:00.238459  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1202 22:18:00.238485  530747 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:18:00.238539  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:18:00.740937  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1202 22:18:00.741026  530747 cache_images.go:125] Successfully loaded all cached images
	I1202 22:18:00.741044  530747 cache_images.go:94] duration metric: took 8.60643049s to LoadCachedImages
	I1202 22:18:00.741063  530747 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1202 22:18:00.741200  530747 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-250247 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 22:18:00.741276  530747 ssh_runner.go:195] Run: sudo crictl info
	I1202 22:18:00.765139  530747 cni.go:84] Creating CNI manager for ""
	I1202 22:18:00.765169  530747 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:18:00.765188  530747 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1202 22:18:00.765212  530747 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-250247 NodeName:newest-cni-250247 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 22:18:00.765326  530747 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-250247"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
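The generated file is one multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by ---. If you want to inspect a single document programmatically, a decoder loop is enough; gopkg.in/yaml.v3 here is an assumed dependency for the sketch, not something this code path requires:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]any
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            // Each document carries its own kind, e.g. ClusterConfiguration.
            fmt.Println("kind:", doc["kind"])
        }
    }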
	
	I1202 22:18:00.765433  530747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:18:00.773193  530747 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1202 22:18:00.773260  530747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:18:00.780940  530747 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1202 22:18:00.781025  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1202 22:18:00.781114  530747 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1202 22:18:00.781149  530747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:18:00.781235  530747 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1202 22:18:00.781291  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
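For a beta Kubernetes version the binaries are fetched straight from dl.k8s.io with a ?checksum=file:<url>.sha256 hint rather than served from the local binary cache. The verification itself is a plain SHA-256 over the downloaded bytes; a minimal sketch with the kubectl URL from the log (error handling trimmed):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    func fetch(url string) []byte {
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        b, err := io.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        return b
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl"
        bin := fetch(base)
        // The .sha256 sidecar holds the hex digest of the binary.
        want := strings.Fields(string(fetch(base + ".sha256")))[0]
        got := sha256.Sum256(bin)
        if hex.EncodeToString(got[:]) != want {
            panic("checksum mismatch")
        }
        fmt.Println("kubectl verified,", len(bin), "bytes")
    }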
	I1202 22:18:00.788716  530747 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1202 22:18:00.788795  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1202 22:18:00.803910  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1202 22:18:00.804008  530747 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1202 22:18:00.804025  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1202 22:18:00.816846  530747 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1202 22:18:00.816880  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1202 22:18:01.602456  530747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 22:18:01.610653  530747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 22:18:01.624603  530747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 22:18:01.638373  530747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1202 22:18:01.652151  530747 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 22:18:01.656237  530747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
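The /etc/hosts update is deliberately idempotent: first grep for an exact control-plane.minikube.internal entry with the right IP, and only when that misses, rewrite the file by filtering out any stale entry and appending the fresh one via a temp file, as the bash one-liner above does. The same logic in Go, operating on a local test file since writing /etc/hosts for real needs root:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any old line ending in "\thost" and appends
    // "ip\thost", mirroring the grep-filter-append one-liner in the log.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        want := ip + "\t" + host
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line == want {
                return nil // already present, nothing to do
            }
            if line != "" && !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, want)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        os.WriteFile("hosts.test", []byte("127.0.0.1\tlocalhost\n"), 0644)
        fmt.Println(ensureHostsEntry("hosts.test", "192.168.85.2",
            "control-plane.minikube.internal"))
    }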
	I1202 22:18:01.666754  530747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:18:01.784445  530747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:18:01.807464  530747 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247 for IP: 192.168.85.2
	I1202 22:18:01.807485  530747 certs.go:195] generating shared ca certs ...
	I1202 22:18:01.807504  530747 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:01.807689  530747 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 22:18:01.807752  530747 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 22:18:01.807763  530747 certs.go:257] generating profile certs ...
	I1202 22:18:01.807833  530747 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.key
	I1202 22:18:01.807852  530747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.crt with IP's: []
	I1202 22:18:01.904440  530747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.crt ...
	I1202 22:18:01.904514  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.crt: {Name:mkac1ba94fca76c17ef6889ccac434c85c3adfde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:01.904734  530747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.key ...
	I1202 22:18:01.904773  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.key: {Name:mk0c9426196191d76ac8bad3e60a1b42170fc3c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:01.904915  530747 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde
	I1202 22:18:01.904963  530747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt.dc944fde with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1202 22:18:02.273695  530747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt.dc944fde ...
	I1202 22:18:02.273733  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt.dc944fde: {Name:mk485afb3918fbbfcd9c10c46151672750ef52be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:02.273936  530747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde ...
	I1202 22:18:02.273952  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde: {Name:mk04b6e3543cdc0fbe6b60437820e2294d1297d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:02.274073  530747 certs.go:382] copying /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt.dc944fde -> /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt
	I1202 22:18:02.274162  530747 certs.go:386] copying /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde -> /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key
	I1202 22:18:02.274234  530747 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key
	I1202 22:18:02.274255  530747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt with IP's: []
	I1202 22:18:02.649970  530747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt ...
	I1202 22:18:02.650005  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt: {Name:mk19f12624bf230a68d68951d2c42662a58d37e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:02.650189  530747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key ...
	I1202 22:18:02.650212  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key: {Name:mk073d5c6ce4db6564bbfc911588b213e2c9f7d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:02.650417  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 22:18:02.650468  530747 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 22:18:02.650481  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 22:18:02.650512  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 22:18:02.650542  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 22:18:02.650565  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 22:18:02.650614  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:18:02.651161  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 22:18:02.670316  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 22:18:02.688640  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 22:18:02.706448  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 22:18:02.724635  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 22:18:02.741956  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 22:18:02.758769  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 22:18:02.776648  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 22:18:02.800104  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 22:18:02.824626  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 22:18:02.842845  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 22:18:02.859809  530747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 22:18:02.872445  530747 ssh_runner.go:195] Run: openssl version
	I1202 22:18:02.878854  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 22:18:02.887208  530747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 22:18:02.891035  530747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 22:18:02.891111  530747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 22:18:02.931750  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 22:18:02.940012  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 22:18:02.948063  530747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:18:02.951902  530747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:18:02.951976  530747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:18:02.992654  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 22:18:03.001115  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 22:18:03.011822  530747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 22:18:03.016313  530747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 22:18:03.016431  530747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 22:18:03.057849  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
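Those `openssl x509 -hash` calls compute the subject hash that OpenSSL's c_rehash scheme expects: each CA ends up reachable as /etc/ssl/certs/<hash>.0 (b5213941.0 for minikubeCA above). A sketch of the hash-then-symlink step, shelling out to openssl the same way the log does:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA symlinks certPath into certsDir under its OpenSSL subject
    // hash (<hash>.0), which is how the b5213941.0 link above comes about.
    func installCA(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        os.Remove(link) // replace any stale link, like the ln -fs in the log
        return os.Symlink(certPath, link)
    }

    func main() {
        fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem",
            "/etc/ssl/certs"))
    }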
	I1202 22:18:03.066486  530747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 22:18:03.070494  530747 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 22:18:03.070546  530747 kubeadm.go:401] StartCluster: {Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:18:03.070624  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 22:18:03.070686  530747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:18:03.097552  530747 cri.go:89] found id: ""
	I1202 22:18:03.097695  530747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 22:18:03.105804  530747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 22:18:03.114013  530747 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 22:18:03.114153  530747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 22:18:03.122166  530747 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 22:18:03.122190  530747 kubeadm.go:158] found existing configuration files:
	
	I1202 22:18:03.122266  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 22:18:03.130248  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 22:18:03.130314  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 22:18:03.137979  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 22:18:03.146142  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 22:18:03.146218  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 22:18:03.153915  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 22:18:03.162129  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 22:18:03.162264  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 22:18:03.170014  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 22:18:03.178190  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 22:18:03.178275  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 22:18:03.185714  530747 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
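kubeadm init is launched with PATH prefixed by the version-pinned binaries directory and a long --ignore-preflight-errors list; on the docker driver minikube deliberately skips checks such as SystemVerification and Mem that do not hold inside a container. A sketch of assembling that invocation, with the flag list shortened (the full set is in the log line above):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // kubeadmInit builds the wrapped init command: the version-pinned PATH
    // prefix plus the comma-joined preflight skip list seen in the log.
    func kubeadmInit(version, config string, ignore []string) *exec.Cmd {
        binDir := "/var/lib/minikube/binaries/" + version
        inner := fmt.Sprintf(
            `env PATH="%s:$PATH" kubeadm init --config %s --ignore-preflight-errors=%s`,
            binDir, config, strings.Join(ignore, ","))
        return exec.Command("sudo", "/bin/bash", "-c", inner)
    }

    func main() {
        cmd := kubeadmInit("v1.35.0-beta.0", "/var/tmp/minikube/kubeadm.yaml",
            []string{"SystemVerification", "Mem", "Swap"})
        fmt.Println(cmd.String())
    }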
	I1202 22:18:03.223736  530747 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 22:18:03.223941  530747 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 22:18:03.308311  530747 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 22:18:03.308389  530747 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 22:18:03.308430  530747 kubeadm.go:319] OS: Linux
	I1202 22:18:03.308479  530747 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 22:18:03.308531  530747 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 22:18:03.308591  530747 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 22:18:03.308643  530747 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 22:18:03.308698  530747 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 22:18:03.308750  530747 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 22:18:03.308799  530747 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 22:18:03.308851  530747 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 22:18:03.308901  530747 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 22:18:03.374665  530747 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 22:18:03.374792  530747 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 22:18:03.374887  530747 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 22:18:03.388049  530747 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 22:18:03.397156  530747 out.go:252]   - Generating certificates and keys ...
	I1202 22:18:03.397267  530747 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 22:18:03.397355  530747 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 22:18:03.624812  530747 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 22:18:03.988647  530747 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 22:18:04.207719  530747 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 22:18:04.369148  530747 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 22:18:04.533091  530747 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 22:18:04.533470  530747 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-250247] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1202 22:18:04.781495  530747 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 22:18:04.781896  530747 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-250247] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1202 22:18:05.055068  530747 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 22:18:05.269007  530747 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 22:18:05.339371  530747 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 22:18:05.339621  530747 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 22:18:05.517146  530747 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 22:18:05.863539  530747 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 22:18:06.326882  530747 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 22:18:06.463358  530747 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 22:18:06.983101  530747 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 22:18:06.983766  530747 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 22:18:06.989546  530747 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 22:18:06.998787  530747 out.go:252]   - Booting up control plane ...
	I1202 22:18:06.998960  530747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 22:18:06.999088  530747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 22:18:06.999437  530747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 22:18:07.023294  530747 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 22:18:07.023746  530747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 22:18:07.031501  530747 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 22:18:07.032911  530747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 22:18:07.033179  530747 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 22:18:07.172448  530747 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 22:18:07.172569  530747 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 22:21:18.402255  510395 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00114361s
	I1202 22:21:18.402290  510395 kubeadm.go:319] 
	I1202 22:21:18.402400  510395 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 22:21:18.402462  510395 kubeadm.go:319] 	- The kubelet is not running
	I1202 22:21:18.403019  510395 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 22:21:18.403038  510395 kubeadm.go:319] 
	I1202 22:21:18.403228  510395 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 22:21:18.403293  510395 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 22:21:18.403358  510395 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 22:21:18.403371  510395 kubeadm.go:319] 
	I1202 22:21:18.408627  510395 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 22:21:18.409060  510395 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 22:21:18.409175  510395 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 22:21:18.409412  510395 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 22:21:18.409421  510395 kubeadm.go:319] 
	I1202 22:21:18.409510  510395 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
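The failure itself is the kubelet never answering its local health endpoint: kubeadm's kubelet-check phase polls http://127.0.0.1:10248/healthz for up to 4m0s and then gives up with the context-deadline error above. The same wait loop, reduced to its essentials:

    package main

    import (
        "context"
        "fmt"
        "net/http"
        "time"
    )

    // waitKubeletHealthy polls the kubelet healthz endpoint until it answers
    // 200 or the context deadline expires, like kubeadm's kubelet-check.
    func waitKubeletHealthy(ctx context.Context) error {
        tick := time.NewTicker(time.Second)
        defer tick.Stop()
        for {
            resp, err := http.Get("http://127.0.0.1:10248/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("kubelet not healthy: %w", ctx.Err())
            case <-tick.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        fmt.Println(waitKubeletHealthy(ctx))
    }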
	I1202 22:21:18.409568  510395 kubeadm.go:403] duration metric: took 8m6.498664339s to StartCluster
	I1202 22:21:18.409608  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:21:18.409703  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:21:18.433892  510395 cri.go:89] found id: ""
	I1202 22:21:18.433920  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.433929  510395 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:21:18.433935  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:21:18.433997  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:21:18.458135  510395 cri.go:89] found id: ""
	I1202 22:21:18.458168  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.458177  510395 logs.go:284] No container was found matching "etcd"
	I1202 22:21:18.458184  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:21:18.458251  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:21:18.487703  510395 cri.go:89] found id: ""
	I1202 22:21:18.487726  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.487735  510395 logs.go:284] No container was found matching "coredns"
	I1202 22:21:18.487742  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:21:18.487825  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:21:18.511734  510395 cri.go:89] found id: ""
	I1202 22:21:18.511757  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.511766  510395 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:21:18.511773  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:21:18.511833  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:21:18.535676  510395 cri.go:89] found id: ""
	I1202 22:21:18.535701  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.535710  510395 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:21:18.535717  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:21:18.535778  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:21:18.608686  510395 cri.go:89] found id: ""
	I1202 22:21:18.608714  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.608733  510395 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:21:18.608740  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:21:18.608810  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:21:18.636332  510395 cri.go:89] found id: ""
	I1202 22:21:18.636357  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.636366  510395 logs.go:284] No container was found matching "kindnet"
	I1202 22:21:18.636377  510395 logs.go:123] Gathering logs for container status ...
	I1202 22:21:18.636389  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:21:18.666396  510395 logs.go:123] Gathering logs for kubelet ...
	I1202 22:21:18.666423  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:21:18.724901  510395 logs.go:123] Gathering logs for dmesg ...
	I1202 22:21:18.724937  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:21:18.740835  510395 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:21:18.740863  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:21:18.806977  510395 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:21:18.799661    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.800223    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.801707    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.802161    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.803571    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:21:18.799661    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.800223    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.801707    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.802161    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.803571    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:21:18.806999  510395 logs.go:123] Gathering logs for containerd ...
	I1202 22:21:18.807011  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1202 22:21:18.849559  510395 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00114361s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 22:21:18.849624  510395 out.go:285] * 
	W1202 22:21:18.849687  510395 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00114361s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 22:21:18.849738  510395 out.go:285] * 
	W1202 22:21:18.851859  510395 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 22:21:18.857885  510395 out.go:203] 
	W1202 22:21:18.861761  510395 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00114361s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 22:21:18.861806  510395 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 22:21:18.861829  510395 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 22:21:18.865566  510395 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 22:13:00 no-preload-904303 containerd[758]: time="2025-12-02T22:13:00.056487043Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:02 no-preload-904303 containerd[758]: time="2025-12-02T22:13:02.309327150Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 02 22:13:02 no-preload-904303 containerd[758]: time="2025-12-02T22:13:02.311580305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 02 22:13:02 no-preload-904303 containerd[758]: time="2025-12-02T22:13:02.319050815Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:02 no-preload-904303 containerd[758]: time="2025-12-02T22:13:02.320996139Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:03 no-preload-904303 containerd[758]: time="2025-12-02T22:13:03.675616176Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 02 22:13:03 no-preload-904303 containerd[758]: time="2025-12-02T22:13:03.678085817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 02 22:13:03 no-preload-904303 containerd[758]: time="2025-12-02T22:13:03.698352226Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:03 no-preload-904303 containerd[758]: time="2025-12-02T22:13:03.698661001Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:05 no-preload-904303 containerd[758]: time="2025-12-02T22:13:05.162074992Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 02 22:13:05 no-preload-904303 containerd[758]: time="2025-12-02T22:13:05.178922518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 02 22:13:05 no-preload-904303 containerd[758]: time="2025-12-02T22:13:05.243438144Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:05 no-preload-904303 containerd[758]: time="2025-12-02T22:13:05.244266609Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:07 no-preload-904303 containerd[758]: time="2025-12-02T22:13:07.858893547Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 02 22:13:07 no-preload-904303 containerd[758]: time="2025-12-02T22:13:07.861151683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 02 22:13:07 no-preload-904303 containerd[758]: time="2025-12-02T22:13:07.869754107Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:07 no-preload-904303 containerd[758]: time="2025-12-02T22:13:07.870787170Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.086495518Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.094498321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.103914819Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.106075735Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.551856390Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.553967518Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.561245033Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.561946477Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:21:22.957140    5808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:22.957786    5808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:22.959287    5808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:22.959715    5808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:22.961156    5808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 22:21:23 up  4:03,  0 user,  load average: 0.18, 0.98, 1.50
	Linux no-preload-904303 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 22:21:19 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:21:20 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 02 22:21:20 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:20 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:20 no-preload-904303 kubelet[5554]: E1202 22:21:20.154013    5554 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:21:20 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:21:20 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:21:20 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 02 22:21:20 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:20 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:20 no-preload-904303 kubelet[5587]: E1202 22:21:20.853868    5587 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:21:20 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:21:20 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:21:21 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 02 22:21:21 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:21 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:21 no-preload-904303 kubelet[5684]: E1202 22:21:21.649199    5684 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:21:21 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:21:21 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:21:22 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 02 22:21:22 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:22 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:21:22 no-preload-904303 kubelet[5720]: E1202 22:21:22.383989    5720 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:21:22 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:21:22 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
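Note: the kubelet journal above shows the actual root cause rather than a generic timeout: kubelet v1.35.0-beta.0 fails configuration validation outright because this host is still on cgroup v1, and the kubeadm warning says cgroup v1 now has to be opted into via the KubeletConfiguration option FailCgroupV1. A minimal sketch of the check and the opt-in, run inside the node (the config path /var/lib/kubelet/config.yaml comes from the kubeadm output above; that the key is not already present in that file, and that minikube keeps a hand-edited file across restarts, are assumptions):

	# Confirm the host cgroup version: "cgroup2fs" means v2, "tmpfs" means v1.
	stat -fc %T /sys/fs/cgroup

	# Explicitly allow the deprecated cgroup v1 setup, then restart the kubelet.
	echo "failCgroupV1: false" | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet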
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-904303 -n no-preload-904303
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-904303 -n no-preload-904303: exit status 6 (353.186578ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1202 22:21:23.463856  537044 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-904303" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-904303" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (3.02s)
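Note: before re-running this group it is worth replaying, by hand, the health probe that kubeadm gave up on; the commands below are the ones the kubeadm output itself recommends, here wrapped in minikube ssh (profile name taken from this test):

	minikube -p no-preload-904303 ssh -- sudo systemctl status kubelet
	minikube -p no-preload-904303 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
	minikube -p no-preload-904303 ssh -- curl -sSL http://127.0.0.1:10248/healthz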

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (117.38s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-904303 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1202 22:21:27.278531  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/default-k8s-diff-port-444714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:21:28.654407  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:21:37.520896  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/default-k8s-diff-port-444714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:21:56.357817  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:21:58.004620  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/default-k8s-diff-port-444714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:22:33.490898  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:22:36.513776  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:22:38.966003  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/default-k8s-diff-port-444714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:22:50.414338  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-904303 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m55.796451466s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-904303 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-904303 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-904303 describe deploy/metrics-server -n kube-system: exit status 1 (83.744017ms)

** stderr ** 
	error: context "no-preload-904303" does not exist

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-904303 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
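Note: the MK_ADDON_ENABLE failure is a downstream symptom of the same dead control plane: kubectl's OpenAPI download targets https://localhost:8443 inside the node, where no apiserver is listening. The --validate=false hint in the error text would only skip schema validation; the apply itself would still be refused. A quick probe that separates "broken addon manifests" from "apiserver down" (endpoint taken from the error text):

	# If this also gets "connection refused", the addon manifests are not the problem.
	minikube -p no-preload-904303 ssh -- curl -sk https://localhost:8443/healthz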
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-904303
helpers_test.go:243: (dbg) docker inspect no-preload-904303:

-- stdout --
	[
	    {
	        "Id": "419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436",
	        "Created": "2025-12-02T22:12:48.891111789Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 510696,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T22:12:48.960673074Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/hostname",
	        "HostsPath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/hosts",
	        "LogPath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436-json.log",
	        "Name": "/no-preload-904303",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-904303:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-904303",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436",
	                "LowerDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021/merged",
	                "UpperDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021/diff",
	                "WorkDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-904303",
	                "Source": "/var/lib/docker/volumes/no-preload-904303/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-904303",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-904303",
	                "name.minikube.sigs.k8s.io": "no-preload-904303",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2565d89b5b0cac53d37704e84ed068e1e8f9fea06698cfb7e3bf5fa82431969c",
	            "SandboxKey": "/var/run/docker/netns/2565d89b5b0c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33388"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33389"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33392"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33390"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33391"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-904303": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:ce:be:b1:c3:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd7fe0193300ea97495798d9ee6ddb57b917596827758698a61d4a79d61723bf",
	                    "EndpointID": "36cc446e2b4667656204614f2648dd0b57c6c026ff3e894f2ade69f763222166",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-904303",
	                        "419e3dce7c5d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
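Note: the inspect dump above is easier to query with docker's built-in Go templating than to read whole; for example, the host port mapped to the apiserver's 8443/tcp (33391 in this run) can be pulled out directly:

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' no-preload-904303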
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-904303 -n no-preload-904303
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-904303 -n no-preload-904303: exit status 6 (319.835845ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1202 22:23:19.688732  539081 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-904303" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-904303 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-996157                                                                                                                                                                                                                                  │ old-k8s-version-996157       │ jenkins │ v1.37.0 │ 02 Dec 25 22:12 UTC │ 02 Dec 25 22:12 UTC │
	│ delete  │ -p old-k8s-version-996157                                                                                                                                                                                                                                  │ old-k8s-version-996157       │ jenkins │ v1.37.0 │ 02 Dec 25 22:12 UTC │ 02 Dec 25 22:12 UTC │
	│ start   │ -p embed-certs-716386 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:12 UTC │ 02 Dec 25 22:13 UTC │
	│ addons  │ enable metrics-server -p embed-certs-716386 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                   │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:14 UTC │
	│ stop    │ -p embed-certs-716386 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:14 UTC │
	│ addons  │ enable dashboard -p embed-certs-716386 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:14 UTC │
	│ start   │ -p embed-certs-716386 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:15 UTC │
	│ image   │ embed-certs-716386 image list --format=json                                                                                                                                                                                                                │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ pause   │ -p embed-certs-716386 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ unpause │ -p embed-certs-716386 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p embed-certs-716386                                                                                                                                                                                                                                      │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p embed-certs-716386                                                                                                                                                                                                                                      │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p disable-driver-mounts-122586                                                                                                                                                                                                                            │ disable-driver-mounts-122586 │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ start   │ -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-444714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ stop    │ -p default-k8s-diff-port-444714 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-444714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ start   │ -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:17 UTC │
	│ image   │ default-k8s-diff-port-444714 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ pause   │ -p default-k8s-diff-port-444714 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ unpause │ -p default-k8s-diff-port-444714 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ delete  │ -p default-k8s-diff-port-444714                                                                                                                                                                                                                            │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ delete  │ -p default-k8s-diff-port-444714                                                                                                                                                                                                                            │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ start   │ -p newest-cni-250247 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-904303 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:21 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 22:17:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 22:17:44.404531  530747 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:17:44.404670  530747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:17:44.404684  530747 out.go:374] Setting ErrFile to fd 2...
	I1202 22:17:44.404690  530747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:17:44.405094  530747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 22:17:44.405637  530747 out.go:368] Setting JSON to false
	I1202 22:17:44.406740  530747 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14403,"bootTime":1764699462,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 22:17:44.406830  530747 start.go:143] virtualization:  
	I1202 22:17:44.410982  530747 out.go:179] * [newest-cni-250247] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:17:44.415278  530747 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:17:44.415454  530747 notify.go:221] Checking for updates...
	I1202 22:17:44.421699  530747 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:17:44.424811  530747 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:17:44.427830  530747 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 22:17:44.430886  530747 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:17:44.433744  530747 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:17:44.437092  530747 config.go:182] Loaded profile config "no-preload-904303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:17:44.437182  530747 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:17:44.470020  530747 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:17:44.470192  530747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:17:44.532667  530747 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:17:44.522777992 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:17:44.532772  530747 docker.go:319] overlay module found
	I1202 22:17:44.536064  530747 out.go:179] * Using the docker driver based on user configuration
	I1202 22:17:44.538930  530747 start.go:309] selected driver: docker
	I1202 22:17:44.538949  530747 start.go:927] validating driver "docker" against <nil>
	I1202 22:17:44.538963  530747 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:17:44.539711  530747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:17:44.592441  530747 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:17:44.583215128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:17:44.592603  530747 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1202 22:17:44.592632  530747 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1202 22:17:44.592854  530747 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 22:17:44.595765  530747 out.go:179] * Using Docker driver with root privileges
	I1202 22:17:44.598585  530747 cni.go:84] Creating CNI manager for ""
	I1202 22:17:44.598655  530747 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:17:44.598670  530747 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 22:17:44.598768  530747 start.go:353] cluster config:
	{Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:17:44.601883  530747 out.go:179] * Starting "newest-cni-250247" primary control-plane node in "newest-cni-250247" cluster
	I1202 22:17:44.604845  530747 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 22:17:44.607693  530747 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 22:17:44.610530  530747 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:17:44.610603  530747 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 22:17:44.634404  530747 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 22:17:44.634428  530747 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 22:17:44.673223  530747 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1202 22:17:44.860204  530747 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
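Both preload URLs 404 because no preload tarball has been published for v1.35.0-beta.0; minikube therefore falls back to loading each cached image individually, as the cache.go lines below show. The missing artifact can be confirmed by hand (URL copied verbatim from the first warning above):

	# expect "HTTP/2 404" for an unpublished preload
	curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 | head -n1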
	I1202 22:17:44.860409  530747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json ...
	I1202 22:17:44.860446  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json: {Name:mk97b8ae8c3d085bfd853be8a3ae939898e326ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:17:44.860450  530747 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860569  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 22:17:44.860579  530747 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 143.118µs
	I1202 22:17:44.860592  530747 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 22:17:44.860605  530747 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860619  530747 cache.go:243] Successfully downloaded all kic artifacts
	I1202 22:17:44.860635  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 22:17:44.860641  530747 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 37.841µs
	I1202 22:17:44.860647  530747 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 22:17:44.860645  530747 start.go:360] acquireMachinesLock for newest-cni-250247: {Name:mk16586a4ea8dcb4ae29d3b0c6fe6a71644be6ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860657  530747 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860684  530747 start.go:364] duration metric: took 30.103µs to acquireMachinesLock for "newest-cni-250247"
	I1202 22:17:44.860691  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 22:17:44.860696  530747 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 40.401µs
	I1202 22:17:44.860705  530747 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 22:17:44.860715  530747 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860742  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 22:17:44.860702  530747 start.go:93] Provisioning new machine with config: &{Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 22:17:44.860748  530747 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 33.394µs
	I1202 22:17:44.860757  530747 start.go:125] createHost starting for "" (driver="docker")
	I1202 22:17:44.860763  530747 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 22:17:44.860772  530747 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860797  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 22:17:44.860802  530747 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 31.228µs
	I1202 22:17:44.860818  530747 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 22:17:44.860827  530747 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860853  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 22:17:44.860858  530747 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 31.933µs
	I1202 22:17:44.860886  530747 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.860912  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 22:17:44.860917  530747 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 31.835µs
	I1202 22:17:44.860923  530747 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 22:17:44.860864  530747 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 22:17:44.860872  530747 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:17:44.861203  530747 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 22:17:44.861213  530747 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 340.707µs
	I1202 22:17:44.861221  530747 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 22:17:44.861233  530747 cache.go:87] Successfully saved all images to host disk.
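As the cache.go lines above show, the per-image fallback cache mirrors registry paths under .minikube/cache/images/<arch>. A quick way to see what was reused for this run (directory taken from the log; the listing command itself is just illustrative):

	ls /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/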
	I1202 22:17:44.866001  530747 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 22:17:44.866290  530747 start.go:159] libmachine.API.Create for "newest-cni-250247" (driver="docker")
	I1202 22:17:44.866351  530747 client.go:173] LocalClient.Create starting
	I1202 22:17:44.866422  530747 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem
	I1202 22:17:44.866458  530747 main.go:143] libmachine: Decoding PEM data...
	I1202 22:17:44.866484  530747 main.go:143] libmachine: Parsing certificate...
	I1202 22:17:44.866546  530747 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem
	I1202 22:17:44.866568  530747 main.go:143] libmachine: Decoding PEM data...
	I1202 22:17:44.866582  530747 main.go:143] libmachine: Parsing certificate...
	I1202 22:17:44.866956  530747 cli_runner.go:164] Run: docker network inspect newest-cni-250247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 22:17:44.882056  530747 cli_runner.go:211] docker network inspect newest-cni-250247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 22:17:44.882136  530747 network_create.go:284] running [docker network inspect newest-cni-250247] to gather additional debugging logs...
	I1202 22:17:44.882156  530747 cli_runner.go:164] Run: docker network inspect newest-cni-250247
	W1202 22:17:44.900735  530747 cli_runner.go:211] docker network inspect newest-cni-250247 returned with exit code 1
	I1202 22:17:44.900767  530747 network_create.go:287] error running [docker network inspect newest-cni-250247]: docker network inspect newest-cni-250247: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-250247 not found
	I1202 22:17:44.900798  530747 network_create.go:289] output of [docker network inspect newest-cni-250247]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-250247 not found
	
	** /stderr **
	I1202 22:17:44.900897  530747 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:17:44.919285  530747 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-37045a918311 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:0e:6a:1d:5f:aa} reservation:<nil>}
	I1202 22:17:44.919603  530747 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-11c615b6a402 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:e5:fa:65:65:bf} reservation:<nil>}
	I1202 22:17:44.919927  530747 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-efeb1d3ec8c6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:0d:78:3a:6e:22} reservation:<nil>}
	I1202 22:17:44.920175  530747 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bd7fe0193300 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:96:46:f1:c8:59:e0} reservation:<nil>}
	I1202 22:17:44.920559  530747 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ad4100}
	I1202 22:17:44.920582  530747 network_create.go:124] attempt to create docker network newest-cni-250247 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1202 22:17:44.920648  530747 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-250247 newest-cni-250247
	I1202 22:17:44.974998  530747 network_create.go:108] docker network newest-cni-250247 192.168.85.0/24 created
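The network.go lines above walk candidate private /24 subnets in order, skipping any whose bridge already exists on the host; 192.168.49/58/67/76 are taken, so 192.168.85.0/24 is chosen. The same occupancy check can be reproduced with plain docker commands, e.g.:

	# list each network with its IPAM subnet, mirroring the probe logged above
	for n in $(docker network ls -q); do
	  docker network inspect "$n" --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'
	done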
	I1202 22:17:44.975031  530747 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-250247" container
	I1202 22:17:44.975103  530747 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 22:17:44.990787  530747 cli_runner.go:164] Run: docker volume create newest-cni-250247 --label name.minikube.sigs.k8s.io=newest-cni-250247 --label created_by.minikube.sigs.k8s.io=true
	I1202 22:17:45.013271  530747 oci.go:103] Successfully created a docker volume newest-cni-250247
	I1202 22:17:45.013406  530747 cli_runner.go:164] Run: docker run --rm --name newest-cni-250247-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-250247 --entrypoint /usr/bin/test -v newest-cni-250247:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 22:17:45.621475  530747 oci.go:107] Successfully prepared a docker volume newest-cni-250247
	I1202 22:17:45.621530  530747 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	W1202 22:17:45.621683  530747 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1202 22:17:45.621835  530747 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 22:17:45.678934  530747 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-250247 --name newest-cni-250247 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-250247 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-250247 --network newest-cni-250247 --ip 192.168.85.2 --volume newest-cni-250247:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
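Note the --publish=127.0.0.1::8443 style flags in the docker run above: leaving the host port empty lets Docker assign a random loopback port, which minikube later resolves via container inspect (port 33413 for 22/tcp below). The same mapping can be read directly, e.g.:

	# show which random host port was bound to the container's SSH port
	docker port newest-cni-250247 22/tcp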
	I1202 22:17:45.981381  530747 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Running}}
	I1202 22:17:46.003380  530747 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:17:46.032446  530747 cli_runner.go:164] Run: docker exec newest-cni-250247 stat /var/lib/dpkg/alternatives/iptables
	I1202 22:17:46.085911  530747 oci.go:144] the created container "newest-cni-250247" has a running status.
	I1202 22:17:46.085938  530747 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa...
	I1202 22:17:46.535806  530747 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 22:17:46.556856  530747 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:17:46.574862  530747 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 22:17:46.574884  530747 kic_runner.go:114] Args: [docker exec --privileged newest-cni-250247 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 22:17:46.613075  530747 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:17:46.633142  530747 machine.go:94] provisionDockerMachine start ...
	I1202 22:17:46.633247  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:46.649612  530747 main.go:143] libmachine: Using SSH client type: native
	I1202 22:17:46.649974  530747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1202 22:17:46.649985  530747 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 22:17:46.650582  530747 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52268->127.0.0.1:33413: read: connection reset by peer
	I1202 22:17:49.801182  530747 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-250247
	
	I1202 22:17:49.801207  530747 ubuntu.go:182] provisioning hostname "newest-cni-250247"
	I1202 22:17:49.801275  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:49.819629  530747 main.go:143] libmachine: Using SSH client type: native
	I1202 22:17:49.819939  530747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1202 22:17:49.819956  530747 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-250247 && echo "newest-cni-250247" | sudo tee /etc/hostname
	I1202 22:17:49.975371  530747 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-250247
	
	I1202 22:17:49.975485  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:49.993473  530747 main.go:143] libmachine: Using SSH client type: native
	I1202 22:17:49.993863  530747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33413 <nil> <nil>}
	I1202 22:17:49.993890  530747 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-250247' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-250247/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-250247' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 22:17:50.158635  530747 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 22:17:50.158664  530747 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 22:17:50.158693  530747 ubuntu.go:190] setting up certificates
	I1202 22:17:50.158702  530747 provision.go:84] configureAuth start
	I1202 22:17:50.158761  530747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:17:50.179744  530747 provision.go:143] copyHostCerts
	I1202 22:17:50.179822  530747 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 22:17:50.179837  530747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 22:17:50.179915  530747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 22:17:50.180033  530747 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 22:17:50.180044  530747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 22:17:50.180070  530747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 22:17:50.180125  530747 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 22:17:50.180135  530747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 22:17:50.180158  530747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 22:17:50.180217  530747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.newest-cni-250247 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-250247]
	I1202 22:17:50.322891  530747 provision.go:177] copyRemoteCerts
	I1202 22:17:50.322959  530747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 22:17:50.323002  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.340979  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.445218  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 22:17:50.462694  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 22:17:50.479627  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 22:17:50.496822  530747 provision.go:87] duration metric: took 338.097427ms to configureAuth
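The server certificate generated during configureAuth carries the SANs listed in the san=[...] line above (127.0.0.1, 192.168.85.2, localhost, minikube, newest-cni-250247). If that ever needs checking, a sketch against the file path from this log:

	openssl x509 -in /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'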
	I1202 22:17:50.496849  530747 ubuntu.go:206] setting minikube options for container-runtime
	I1202 22:17:50.497071  530747 config.go:182] Loaded profile config "newest-cni-250247": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:17:50.497084  530747 machine.go:97] duration metric: took 3.863925712s to provisionDockerMachine
	I1202 22:17:50.497092  530747 client.go:176] duration metric: took 5.630731537s to LocalClient.Create
	I1202 22:17:50.497116  530747 start.go:167] duration metric: took 5.630827551s to libmachine.API.Create "newest-cni-250247"
	I1202 22:17:50.497128  530747 start.go:293] postStartSetup for "newest-cni-250247" (driver="docker")
	I1202 22:17:50.497139  530747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 22:17:50.497194  530747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 22:17:50.497239  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.514214  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.617428  530747 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 22:17:50.620641  530747 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 22:17:50.620667  530747 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 22:17:50.620679  530747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 22:17:50.620732  530747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 22:17:50.620820  530747 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 22:17:50.620924  530747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 22:17:50.628593  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:17:50.645552  530747 start.go:296] duration metric: took 148.408487ms for postStartSetup
	I1202 22:17:50.646009  530747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:17:50.663355  530747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json ...
	I1202 22:17:50.663625  530747 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:17:50.663682  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.680676  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.782247  530747 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 22:17:50.786667  530747 start.go:128] duration metric: took 5.925896109s to createHost
	I1202 22:17:50.786693  530747 start.go:83] releasing machines lock for "newest-cni-250247", held for 5.926000968s
	I1202 22:17:50.786793  530747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:17:50.811296  530747 ssh_runner.go:195] Run: cat /version.json
	I1202 22:17:50.811346  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.811581  530747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 22:17:50.811637  530747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:17:50.845065  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.848995  530747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:17:50.954487  530747 ssh_runner.go:195] Run: systemctl --version
	I1202 22:17:51.041838  530747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 22:17:51.046274  530747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 22:17:51.046341  530747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 22:17:51.074410  530747 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
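Rather than deleting pre-existing bridge/podman CNI configs, the find/mv above renames them with a .mk_disabled suffix. A hypothetical manual revert, should those configs ever need restoring by hand:

	# undo the .mk_disabled rename applied above (sketch, not a minikube command)
	for f in /etc/cni/net.d/*.mk_disabled; do
	  sudo mv "$f" "${f%.mk_disabled}"
	done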
	I1202 22:17:51.074488  530747 start.go:496] detecting cgroup driver to use...
	I1202 22:17:51.074529  530747 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 22:17:51.074589  530747 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 22:17:51.089808  530747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 22:17:51.103374  530747 docker.go:218] disabling cri-docker service (if available) ...
	I1202 22:17:51.103448  530747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 22:17:51.121828  530747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 22:17:51.141789  530747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 22:17:51.261866  530747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 22:17:51.373962  530747 docker.go:234] disabling docker service ...
	I1202 22:17:51.374058  530747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 22:17:51.395721  530747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 22:17:51.408889  530747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 22:17:51.526784  530747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 22:17:51.666659  530747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 22:17:51.680421  530747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 22:17:51.694156  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 22:17:51.703246  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 22:17:51.711965  530747 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 22:17:51.712032  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 22:17:51.720534  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:17:51.729291  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 22:17:51.737871  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:17:51.746530  530747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 22:17:51.754381  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 22:17:51.763140  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 22:17:51.771794  530747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1202 22:17:51.780775  530747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 22:17:51.788502  530747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 22:17:51.796157  530747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:17:51.902580  530747 ssh_runner.go:195] Run: sudo systemctl restart containerd
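Taken together, the sed edits above set containerd up for the cgroupfs driver (SystemdCgroup = false), the runc v2 runtime, pause:3.10.1 as the sandbox image, unprivileged ports, and /etc/cni/net.d as the CNI conf dir; the daemon-reload/restart pair then applies them. A quick way to review the result (key names taken from the sed patterns above; the grep itself is illustrative):

	sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports|restrict_oom_score_adj' /etc/containerd/config.toml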
	I1202 22:17:51.980987  530747 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 22:17:51.981071  530747 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 22:17:51.985388  530747 start.go:564] Will wait 60s for crictl version
	I1202 22:17:51.985466  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:51.989692  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 22:17:52.016719  530747 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 22:17:52.016798  530747 ssh_runner.go:195] Run: containerd --version
	I1202 22:17:52.038219  530747 ssh_runner.go:195] Run: containerd --version
	I1202 22:17:52.064913  530747 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 22:17:52.067896  530747 cli_runner.go:164] Run: docker network inspect newest-cni-250247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:17:52.085835  530747 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 22:17:52.089730  530747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:17:52.103887  530747 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1202 22:17:52.106903  530747 kubeadm.go:884] updating cluster {Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 22:17:52.107043  530747 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:17:52.107129  530747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:17:52.134576  530747 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1202 22:17:52.134599  530747 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1202 22:17:52.134654  530747 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:52.134875  530747 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.135006  530747 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.135100  530747 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.135208  530747 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.135311  530747 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1202 22:17:52.135412  530747 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.135504  530747 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.137073  530747 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:52.137501  530747 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.138001  530747 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1202 22:17:52.138176  530747 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.138332  530747 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.138460  530747 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.138578  530747 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.138691  530747 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.486863  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
	I1202 22:17:52.486989  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.505095  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
	I1202 22:17:52.505220  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.506565  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
	I1202 22:17:52.506632  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.507855  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
	I1202 22:17:52.507959  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1202 22:17:52.511007  530747 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
	I1202 22:17:52.511097  530747 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.511158  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.532242  530747 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
	I1202 22:17:52.532340  530747 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.532411  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.548827  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
	I1202 22:17:52.548944  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.555163  530747 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I1202 22:17:52.555232  530747 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.555287  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.555371  530747 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1202 22:17:52.555414  530747 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1202 22:17:52.555456  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.555558  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.555663  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.571785  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
	I1202 22:17:52.571882  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.572308  530747 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
	I1202 22:17:52.572386  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.584466  530747 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
	I1202 22:17:52.584539  530747 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.584606  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.623084  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.623169  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.623195  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:17:52.623234  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.623260  530747 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
	I1202 22:17:52.623496  530747 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.623551  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.626466  530747 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
	I1202 22:17:52.626537  530747 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.626592  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:52.626678  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.708388  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.708492  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1202 22:17:52.708522  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.708744  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:17:52.708784  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1202 22:17:52.708846  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.708845  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.808693  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1202 22:17:52.808697  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1202 22:17:52.808758  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.808797  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:17:52.808930  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.808998  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1202 22:17:52.809059  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:17:52.809112  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1202 22:17:52.819994  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1202 22:17:52.890157  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1202 22:17:52.890191  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
	I1202 22:17:52.890263  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1202 22:17:52.890334  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:17:52.890389  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1202 22:17:52.890452  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I1202 22:17:52.890493  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:17:52.890543  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1202 22:17:52.890584  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1202 22:17:52.890594  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
	I1202 22:17:52.890633  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1202 22:17:52.890674  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1202 22:17:52.959924  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1202 22:17:52.959968  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1202 22:17:52.960046  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1202 22:17:52.960065  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I1202 22:17:52.960114  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1202 22:17:52.960195  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:17:52.960258  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
	I1202 22:17:52.960308  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:17:52.960375  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1202 22:17:52.960392  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
	I1202 22:17:53.029868  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1202 22:17:53.030286  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
	I1202 22:17:53.030052  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1202 22:17:53.030412  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
	I1202 22:17:53.145803  530747 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1202 22:17:53.145876  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	W1202 22:17:53.398719  530747 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1202 22:17:53.398959  530747 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
	I1202 22:17:53.399035  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:53.477986  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1202 22:17:53.503645  530747 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1202 22:17:53.503712  530747 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:53.503778  530747 ssh_runner.go:195] Run: which crictl
	I1202 22:17:53.528168  530747 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:17:53.528256  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1202 22:17:53.576028  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:54.770595  530747 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.242308714s)
	I1202 22:17:54.770671  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1202 22:17:54.770607  530747 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.194545785s)
	I1202 22:17:54.770798  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:54.770711  530747 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:17:54.770887  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1202 22:17:55.659136  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1202 22:17:55.659166  530747 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:17:55.659215  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1202 22:17:55.659308  530747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:17:56.632721  530747 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1202 22:17:56.632828  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:17:56.632885  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1202 22:17:56.632905  530747 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:17:56.632929  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
	I1202 22:17:57.649982  530747 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1: (1.017026977s)
	I1202 22:17:57.650005  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1202 22:17:57.650033  530747 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:17:57.650080  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1202 22:17:57.650148  530747 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.017305696s)
	I1202 22:17:57.650163  530747 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1202 22:17:57.650176  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1202 22:17:58.684406  530747 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.034301571s)
	I1202 22:17:58.684477  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1202 22:17:58.684519  530747 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:17:58.684597  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
	I1202 22:18:00.238431  530747 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.553804859s)
	I1202 22:18:00.238459  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1202 22:18:00.238485  530747 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:18:00.238539  530747 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1202 22:18:00.740937  530747 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1202 22:18:00.741026  530747 cache_images.go:125] Successfully loaded all cached images
	I1202 22:18:00.741044  530747 cache_images.go:94] duration metric: took 8.60643049s to LoadCachedImages
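A quick way to confirm the load at this point, reusing the same ctr invocation the loader itself runs (illustrative):
	sudo ctr -n=k8s.io images ls | grep -E 'kube-apiserver|kube-proxy|etcd|coredns|pause|storage-provisioner'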
	I1202 22:18:00.741063  530747 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1202 22:18:00.741200  530747 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-250247 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
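Once the unit and drop-in land on disk (the scp of kubelet.service and 10-kubeadm.conf a few lines below), the merged unit can be inspected from the host; a hypothetical check via minikube ssh:
	minikube ssh -p newest-cni-250247 -- sudo systemctl cat kubelet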
	I1202 22:18:00.741276  530747 ssh_runner.go:195] Run: sudo crictl info
	I1202 22:18:00.765139  530747 cni.go:84] Creating CNI manager for ""
	I1202 22:18:00.765169  530747 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:18:00.765188  530747 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1202 22:18:00.765212  530747 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-250247 NodeName:newest-cni-250247 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 22:18:00.765326  530747 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-250247"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
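A config of this shape can be sanity-checked against kubeadm before the real init further below (hypothetical invocation; --dry-run is a standard kubeadm flag, and the binary path mirrors the one used at init time):
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run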
	
	I1202 22:18:00.765433  530747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:18:00.773193  530747 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1202 22:18:00.773260  530747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:18:00.780940  530747 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
	I1202 22:18:00.781025  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1202 22:18:00.781114  530747 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
	I1202 22:18:00.781149  530747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:18:00.781235  530747 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
	I1202 22:18:00.781291  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1202 22:18:00.788716  530747 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1202 22:18:00.788795  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
	I1202 22:18:00.803910  530747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1202 22:18:00.804008  530747 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1202 22:18:00.804025  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
	I1202 22:18:00.816846  530747 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1202 22:18:00.816880  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
	I1202 22:18:01.602456  530747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 22:18:01.610653  530747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 22:18:01.624603  530747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 22:18:01.638373  530747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1202 22:18:01.652151  530747 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 22:18:01.656237  530747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
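Both hosts edits (host.minikube.internal at 22:17:52, control-plane.minikube.internal here) use the same filter-append-copy pattern, so per the two commands the node's /etc/hosts should now carry:
	192.168.85.1	host.minikube.internal
	192.168.85.2	control-plane.minikube.internal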
	I1202 22:18:01.666754  530747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:18:01.784445  530747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:18:01.807464  530747 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247 for IP: 192.168.85.2
	I1202 22:18:01.807485  530747 certs.go:195] generating shared ca certs ...
	I1202 22:18:01.807504  530747 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:01.807689  530747 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 22:18:01.807752  530747 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 22:18:01.807763  530747 certs.go:257] generating profile certs ...
	I1202 22:18:01.807833  530747 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.key
	I1202 22:18:01.807852  530747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.crt with IP's: []
	I1202 22:18:01.904440  530747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.crt ...
	I1202 22:18:01.904514  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.crt: {Name:mkac1ba94fca76c17ef6889ccac434c85c3adfde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:01.904734  530747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.key ...
	I1202 22:18:01.904773  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.key: {Name:mk0c9426196191d76ac8bad3e60a1b42170fc3c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:01.904915  530747 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde
	I1202 22:18:01.904963  530747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt.dc944fde with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1202 22:18:02.273695  530747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt.dc944fde ...
	I1202 22:18:02.273733  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt.dc944fde: {Name:mk485afb3918fbbfcd9c10c46151672750ef52be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:02.273936  530747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde ...
	I1202 22:18:02.273952  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde: {Name:mk04b6e3543cdc0fbe6b60437820e2294d1297d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:02.274073  530747 certs.go:382] copying /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt.dc944fde -> /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt
	I1202 22:18:02.274162  530747 certs.go:386] copying /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde -> /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key
	I1202 22:18:02.274234  530747 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key
	I1202 22:18:02.274255  530747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt with IP's: []
	I1202 22:18:02.649970  530747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt ...
	I1202 22:18:02.650005  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt: {Name:mk19f12624bf230a68d68951d2c42662a58d37e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:02.650189  530747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key ...
	I1202 22:18:02.650212  530747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key: {Name:mk073d5c6ce4db6564bbfc911588b213e2c9f7d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:18:02.650417  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 22:18:02.650468  530747 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 22:18:02.650481  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 22:18:02.650512  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 22:18:02.650542  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 22:18:02.650565  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 22:18:02.650614  530747 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:18:02.651161  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 22:18:02.670316  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 22:18:02.688640  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 22:18:02.706448  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 22:18:02.724635  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 22:18:02.741956  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 22:18:02.758769  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 22:18:02.776648  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 22:18:02.800104  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 22:18:02.824626  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 22:18:02.842845  530747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 22:18:02.859809  530747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 22:18:02.872445  530747 ssh_runner.go:195] Run: openssl version
	I1202 22:18:02.878854  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 22:18:02.887208  530747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 22:18:02.891035  530747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 22:18:02.891111  530747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 22:18:02.931750  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 22:18:02.940012  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 22:18:02.948063  530747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:18:02.951902  530747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:18:02.951976  530747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:18:02.992654  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 22:18:03.001115  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 22:18:03.011822  530747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 22:18:03.016313  530747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 22:18:03.016431  530747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 22:18:03.057849  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
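The symlink names come from OpenSSL's subject-hash lookup scheme: each CA gets an alias named <hash>.0 so the library can find it by hash. The naming step for minikubeCA.pem, reproduced from the two commands above:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 on this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0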
	I1202 22:18:03.066486  530747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 22:18:03.070494  530747 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 22:18:03.070546  530747 kubeadm.go:401] StartCluster: {Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:18:03.070624  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 22:18:03.070686  530747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:18:03.097552  530747 cri.go:89] found id: ""
	I1202 22:18:03.097695  530747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 22:18:03.105804  530747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 22:18:03.114013  530747 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 22:18:03.114153  530747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 22:18:03.122166  530747 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 22:18:03.122190  530747 kubeadm.go:158] found existing configuration files:
	
	I1202 22:18:03.122266  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 22:18:03.130248  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 22:18:03.130314  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 22:18:03.137979  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 22:18:03.146142  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 22:18:03.146218  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 22:18:03.153915  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 22:18:03.162129  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 22:18:03.162264  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 22:18:03.170014  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 22:18:03.178190  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 22:18:03.178275  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 22:18:03.185714  530747 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 22:18:03.223736  530747 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 22:18:03.223941  530747 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 22:18:03.308311  530747 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 22:18:03.308389  530747 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 22:18:03.308430  530747 kubeadm.go:319] OS: Linux
	I1202 22:18:03.308479  530747 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 22:18:03.308531  530747 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 22:18:03.308591  530747 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 22:18:03.308643  530747 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 22:18:03.308698  530747 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 22:18:03.308750  530747 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 22:18:03.308799  530747 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 22:18:03.308851  530747 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 22:18:03.308901  530747 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 22:18:03.374665  530747 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 22:18:03.374792  530747 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 22:18:03.374887  530747 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 22:18:03.388049  530747 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 22:18:03.397156  530747 out.go:252]   - Generating certificates and keys ...
	I1202 22:18:03.397267  530747 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 22:18:03.397355  530747 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 22:18:03.624812  530747 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 22:18:03.988647  530747 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 22:18:04.207719  530747 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 22:18:04.369148  530747 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 22:18:04.533091  530747 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 22:18:04.533470  530747 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-250247] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1202 22:18:04.781495  530747 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 22:18:04.781896  530747 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-250247] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1202 22:18:05.055068  530747 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 22:18:05.269007  530747 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 22:18:05.339371  530747 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 22:18:05.339621  530747 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 22:18:05.517146  530747 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 22:18:05.863539  530747 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 22:18:06.326882  530747 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 22:18:06.463358  530747 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 22:18:06.983101  530747 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 22:18:06.983766  530747 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 22:18:06.989546  530747 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 22:18:06.998787  530747 out.go:252]   - Booting up control plane ...
	I1202 22:18:06.998960  530747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 22:18:06.999088  530747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 22:18:06.999437  530747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 22:18:07.023294  530747 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 22:18:07.023746  530747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 22:18:07.031501  530747 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 22:18:07.032911  530747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 22:18:07.033179  530747 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 22:18:07.172448  530747 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 22:18:07.172569  530747 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 22:21:18.402255  510395 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00114361s
	I1202 22:21:18.402290  510395 kubeadm.go:319] 
	I1202 22:21:18.402400  510395 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 22:21:18.402462  510395 kubeadm.go:319] 	- The kubelet is not running
	I1202 22:21:18.403019  510395 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 22:21:18.403038  510395 kubeadm.go:319] 
	I1202 22:21:18.403228  510395 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 22:21:18.403293  510395 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 22:21:18.403358  510395 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 22:21:18.403371  510395 kubeadm.go:319] 
	I1202 22:21:18.408627  510395 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 22:21:18.409060  510395 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 22:21:18.409175  510395 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 22:21:18.409412  510395 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1202 22:21:18.409421  510395 kubeadm.go:319] 
	I1202 22:21:18.409510  510395 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1202 22:21:18.409568  510395 kubeadm.go:403] duration metric: took 8m6.498664339s to StartCluster
	I1202 22:21:18.409608  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:21:18.409703  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:21:18.433892  510395 cri.go:89] found id: ""
	I1202 22:21:18.433920  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.433929  510395 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:21:18.433935  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:21:18.433997  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:21:18.458135  510395 cri.go:89] found id: ""
	I1202 22:21:18.458168  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.458177  510395 logs.go:284] No container was found matching "etcd"
	I1202 22:21:18.458184  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:21:18.458251  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:21:18.487703  510395 cri.go:89] found id: ""
	I1202 22:21:18.487726  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.487735  510395 logs.go:284] No container was found matching "coredns"
	I1202 22:21:18.487742  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:21:18.487825  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:21:18.511734  510395 cri.go:89] found id: ""
	I1202 22:21:18.511757  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.511766  510395 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:21:18.511773  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:21:18.511833  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:21:18.535676  510395 cri.go:89] found id: ""
	I1202 22:21:18.535701  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.535710  510395 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:21:18.535717  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:21:18.535778  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:21:18.608686  510395 cri.go:89] found id: ""
	I1202 22:21:18.608714  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.608733  510395 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:21:18.608740  510395 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:21:18.608810  510395 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:21:18.636332  510395 cri.go:89] found id: ""
	I1202 22:21:18.636357  510395 logs.go:282] 0 containers: []
	W1202 22:21:18.636366  510395 logs.go:284] No container was found matching "kindnet"
	I1202 22:21:18.636377  510395 logs.go:123] Gathering logs for container status ...
	I1202 22:21:18.636389  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:21:18.666396  510395 logs.go:123] Gathering logs for kubelet ...
	I1202 22:21:18.666423  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:21:18.724901  510395 logs.go:123] Gathering logs for dmesg ...
	I1202 22:21:18.724937  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:21:18.740835  510395 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:21:18.740863  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:21:18.806977  510395 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:21:18.799661    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.800223    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.801707    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.802161    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.803571    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:21:18.799661    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.800223    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.801707    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.802161    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:21:18.803571    5435 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:21:18.806999  510395 logs.go:123] Gathering logs for containerd ...
	I1202 22:21:18.807011  510395 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	W1202 22:21:18.849559  510395 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00114361s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 22:21:18.849624  510395 out.go:285] * 
	W1202 22:21:18.849687  510395 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00114361s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 22:21:18.849738  510395 out.go:285] * 
	W1202 22:21:18.851859  510395 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 22:21:18.857885  510395 out.go:203] 
	W1202 22:21:18.861761  510395 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00114361s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 22:21:18.861806  510395 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 22:21:18.861829  510395 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 22:21:18.865566  510395 out.go:203] 
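The failed start above comes from a second profile running in parallel (note the PID switch from 530747 to 510395, and back below); both runs die the same way. The kubelet journal at the end of this report shows the root cause: kubelet v1.35.0-beta.0 refuses to run on a cgroup v1 host unless explicitly allowed. Two checks/workarounds, sketched under the assumption of shell access to the node container:

	# 1) Confirm the host's cgroup mode: "cgroup2fs" means v2, "tmpfs" means legacy v1 (as here).
	stat -fc %T /sys/fs/cgroup
	# 2) The [WARNING SystemVerification] above names the escape hatch: set the
	#    KubeletConfiguration field failCgroupV1 to false (and skip the preflight
	#    validation, which minikube already does via --ignore-preflight-errors).
	#    The patch path below is illustrative; minikube already feeds kubeadm a
	#    kubeletconfiguration patch, per the "[patches] Applied patch" lines.
	cat <<-'EOF' | sudo tee /var/tmp/minikube/patches/kubeletconfiguration.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF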
	I1202 22:22:07.172189  530747 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000061877s
	I1202 22:22:07.172421  530747 kubeadm.go:319] 
	I1202 22:22:07.172498  530747 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 22:22:07.172536  530747 kubeadm.go:319] 	- The kubelet is not running
	I1202 22:22:07.172651  530747 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 22:22:07.172657  530747 kubeadm.go:319] 
	I1202 22:22:07.172769  530747 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 22:22:07.172808  530747 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 22:22:07.172839  530747 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 22:22:07.172843  530747 kubeadm.go:319] 
	I1202 22:22:07.176868  530747 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 22:22:07.177410  530747 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 22:22:07.177535  530747 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 22:22:07.177870  530747 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1202 22:22:07.177893  530747 kubeadm.go:319] 
	I1202 22:22:07.178011  530747 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1202 22:22:07.178084  530747 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-250247] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-250247] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000061877s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1202 22:22:07.178174  530747 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I1202 22:22:07.582154  530747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 22:22:07.596186  530747 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 22:22:07.596254  530747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 22:22:07.604575  530747 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 22:22:07.604597  530747 kubeadm.go:158] found existing configuration files:
	
	I1202 22:22:07.604653  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 22:22:07.612860  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 22:22:07.612925  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 22:22:07.620756  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 22:22:07.628603  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 22:22:07.628670  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 22:22:07.636283  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 22:22:07.644222  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 22:22:07.644282  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 22:22:07.651905  530747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 22:22:07.659956  530747 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 22:22:07.660066  530747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 22:22:07.667857  530747 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 22:22:07.708384  530747 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1202 22:22:07.708648  530747 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 22:22:07.776064  530747 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 22:22:07.776184  530747 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 22:22:07.776244  530747 kubeadm.go:319] OS: Linux
	I1202 22:22:07.776341  530747 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 22:22:07.776426  530747 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 22:22:07.776506  530747 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 22:22:07.776581  530747 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 22:22:07.776644  530747 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 22:22:07.776713  530747 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 22:22:07.776776  530747 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 22:22:07.776870  530747 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 22:22:07.776937  530747 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 22:22:07.856435  530747 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 22:22:07.856606  530747 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 22:22:07.856736  530747 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 22:22:07.863429  530747 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 22:22:07.868636  530747 out.go:252]   - Generating certificates and keys ...
	I1202 22:22:07.868735  530747 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 22:22:07.868847  530747 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 22:22:07.868963  530747 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1202 22:22:07.869041  530747 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1202 22:22:07.869134  530747 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1202 22:22:07.869207  530747 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1202 22:22:07.869308  530747 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1202 22:22:07.869882  530747 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1202 22:22:07.870407  530747 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1202 22:22:07.870926  530747 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1202 22:22:07.871398  530747 kubeadm.go:319] [certs] Using the existing "sa" key
	I1202 22:22:07.871624  530747 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 22:22:08.110111  530747 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 22:22:08.319444  530747 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 22:22:08.500616  530747 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 22:22:08.835962  530747 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 22:22:09.133922  530747 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 22:22:09.134451  530747 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 22:22:09.137223  530747 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 22:22:09.140286  530747 out.go:252]   - Booting up control plane ...
	I1202 22:22:09.140379  530747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 22:22:09.140452  530747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 22:22:09.141569  530747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 22:22:09.162438  530747 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 22:22:09.162564  530747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 22:22:09.170170  530747 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 22:22:09.170681  530747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 22:22:09.170905  530747 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 22:22:09.303893  530747 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 22:22:09.304013  530747 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 22:13:00 no-preload-904303 containerd[758]: time="2025-12-02T22:13:00.056487043Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:02 no-preload-904303 containerd[758]: time="2025-12-02T22:13:02.309327150Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 02 22:13:02 no-preload-904303 containerd[758]: time="2025-12-02T22:13:02.311580305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 02 22:13:02 no-preload-904303 containerd[758]: time="2025-12-02T22:13:02.319050815Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:02 no-preload-904303 containerd[758]: time="2025-12-02T22:13:02.320996139Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:03 no-preload-904303 containerd[758]: time="2025-12-02T22:13:03.675616176Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 02 22:13:03 no-preload-904303 containerd[758]: time="2025-12-02T22:13:03.678085817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 02 22:13:03 no-preload-904303 containerd[758]: time="2025-12-02T22:13:03.698352226Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:03 no-preload-904303 containerd[758]: time="2025-12-02T22:13:03.698661001Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:05 no-preload-904303 containerd[758]: time="2025-12-02T22:13:05.162074992Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 02 22:13:05 no-preload-904303 containerd[758]: time="2025-12-02T22:13:05.178922518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 02 22:13:05 no-preload-904303 containerd[758]: time="2025-12-02T22:13:05.243438144Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:05 no-preload-904303 containerd[758]: time="2025-12-02T22:13:05.244266609Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:07 no-preload-904303 containerd[758]: time="2025-12-02T22:13:07.858893547Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 02 22:13:07 no-preload-904303 containerd[758]: time="2025-12-02T22:13:07.861151683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 02 22:13:07 no-preload-904303 containerd[758]: time="2025-12-02T22:13:07.869754107Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:07 no-preload-904303 containerd[758]: time="2025-12-02T22:13:07.870787170Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.086495518Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.094498321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.103914819Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.106075735Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.551856390Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.553967518Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.561245033Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:13:09 no-preload-904303 containerd[758]: time="2025-12-02T22:13:09.561946477Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
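Note that the containerd section contains only successful ImageCreate/ImageUpdate events for the v1.35.0-beta.0 control-plane images, so the no-preload image pulls themselves worked; nothing ever runs because the kubelet never comes up to start the static pods. A quick on-node confirmation (sketch):

	sudo crictl images | grep -E 'registry.k8s.io/(kube-|etcd|coredns)'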
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:23:20.344037    6926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:23:20.344745    6926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:23:20.346360    6926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:23:20.346949    6926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:23:20.348490    6926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
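Every probe above dies with connection refused on port 8443: with the kubelet down, the static-pod manifests under /etc/kubernetes/manifests are never acted on, so no kube-apiserver ever listens. The same conclusion from inside the node, sketched:

	sudo crictl ps -a --name kube-apiserver      # no rows: the static pod was never created
	curl -sk https://localhost:8443/healthz      # connection refused, matching the errors above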
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 22:23:20 up  4:05,  0 user,  load average: 0.49, 0.92, 1.41
	Linux no-preload-904303 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 22:23:17 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:23:17 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 479.
	Dec 02 22:23:17 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:23:17 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:23:17 no-preload-904303 kubelet[6808]: E1202 22:23:17.839237    6808 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:23:17 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:23:17 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:23:18 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 02 22:23:18 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:23:18 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:23:18 no-preload-904303 kubelet[6813]: E1202 22:23:18.584867    6813 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:23:18 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:23:18 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:23:19 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 02 22:23:19 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:23:19 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:23:19 no-preload-904303 kubelet[6823]: E1202 22:23:19.348103    6823 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:23:19 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:23:19 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:23:20 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 02 22:23:20 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:23:20 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:23:20 no-preload-904303 kubelet[6856]: E1202 22:23:20.103854    6856 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:23:20 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:23:20 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
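The crash loop in the kubelet log above is a startup-validation failure rather than a runtime crash: this kubelet build refuses to run on a cgroup v1 host ("cgroup v1 support is unsupported and will be removed in a future release"), and the node container is evidently still on the v1 hierarchy. A minimal way to confirm which hierarchy the node sees (container name taken from this run):

	# Print the filesystem type mounted at the cgroup root inside the node container:
	# "cgroup2fs" means cgroup v2; "tmpfs" means cgroup v1, which this kubelet rejects.
	docker exec no-preload-904303 stat -fc %T /sys/fs/cgroup/

Whether the check can still be relaxed via the failCgroupV1 KubeletConfiguration field (added in Kubernetes 1.31) on this beta is an assumption; the log only shows the validation failing.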
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-904303 -n no-preload-904303
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-904303 -n no-preload-904303: exit status 6 (368.966166ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1202 22:23:20.846385  539300 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-904303" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "no-preload-904303" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (117.38s)

x
+
TestStartStop/group/no-preload/serial/SecondStart (370.24s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-904303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1202 22:24:00.888260  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/default-k8s-diff-port-444714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:24:44.122896  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p no-preload-904303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 80 (6m8.475397548s)

-- stdout --
	* [no-preload-904303] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-904303" primary control-plane node in "no-preload-904303" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1202 22:23:22.383311  539599 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:23:22.383495  539599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:23:22.383526  539599 out.go:374] Setting ErrFile to fd 2...
	I1202 22:23:22.383548  539599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:23:22.384147  539599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 22:23:22.384563  539599 out.go:368] Setting JSON to false
	I1202 22:23:22.385463  539599 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14741,"bootTime":1764699462,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 22:23:22.385557  539599 start.go:143] virtualization:  
	I1202 22:23:22.388730  539599 out.go:179] * [no-preload-904303] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:23:22.392696  539599 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:23:22.392797  539599 notify.go:221] Checking for updates...
	I1202 22:23:22.398478  539599 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:23:22.401214  539599 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:23:22.404143  539599 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 22:23:22.406933  539599 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:23:22.409907  539599 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:23:22.413394  539599 config.go:182] Loaded profile config "no-preload-904303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:23:22.413984  539599 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:23:22.437751  539599 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:23:22.437859  539599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:23:22.494629  539599 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:23:22.485394686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:23:22.494770  539599 docker.go:319] overlay module found
	I1202 22:23:22.499692  539599 out.go:179] * Using the docker driver based on existing profile
	I1202 22:23:22.502553  539599 start.go:309] selected driver: docker
	I1202 22:23:22.502578  539599 start.go:927] validating driver "docker" against &{Name:no-preload-904303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-904303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:23:22.502679  539599 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:23:22.503398  539599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:23:22.557969  539599 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:23:22.549356271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:23:22.558338  539599 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 22:23:22.558370  539599 cni.go:84] Creating CNI manager for ""
	I1202 22:23:22.558425  539599 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:23:22.558467  539599 start.go:353] cluster config:
	{Name:no-preload-904303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-904303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:23:22.561606  539599 out.go:179] * Starting "no-preload-904303" primary control-plane node in "no-preload-904303" cluster
	I1202 22:23:22.564413  539599 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 22:23:22.567370  539599 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 22:23:22.570230  539599 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:23:22.570321  539599 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 22:23:22.570386  539599 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/config.json ...
	I1202 22:23:22.570675  539599 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.570753  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 22:23:22.570769  539599 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.774µs
	I1202 22:23:22.570784  539599 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 22:23:22.570800  539599 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.570834  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 22:23:22.570844  539599 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 44.823µs
	I1202 22:23:22.570850  539599 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 22:23:22.570866  539599 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.570898  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 22:23:22.570907  539599 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 42.493µs
	I1202 22:23:22.570915  539599 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 22:23:22.570926  539599 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.570958  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 22:23:22.570975  539599 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 42.838µs
	I1202 22:23:22.570982  539599 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 22:23:22.570991  539599 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.571040  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 22:23:22.571049  539599 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 59.452µs
	I1202 22:23:22.571055  539599 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 22:23:22.571064  539599 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.571094  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 22:23:22.571103  539599 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 39.457µs
	I1202 22:23:22.571108  539599 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 22:23:22.571117  539599 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.571146  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 22:23:22.571154  539599 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 38.03µs
	I1202 22:23:22.571159  539599 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 22:23:22.571168  539599 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.571197  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 22:23:22.571205  539599 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 38.12µs
	I1202 22:23:22.571211  539599 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 22:23:22.571217  539599 cache.go:87] Successfully saved all images to host disk.
	I1202 22:23:22.590450  539599 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 22:23:22.590474  539599 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 22:23:22.590493  539599 cache.go:243] Successfully downloaded all kic artifacts
	I1202 22:23:22.590523  539599 start.go:360] acquireMachinesLock for no-preload-904303: {Name:mk2c72bf119f004a39efee961482984889590787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.590579  539599 start.go:364] duration metric: took 35.757µs to acquireMachinesLock for "no-preload-904303"
	I1202 22:23:22.590604  539599 start.go:96] Skipping create...Using existing machine configuration
	I1202 22:23:22.590613  539599 fix.go:54] fixHost starting: 
	I1202 22:23:22.590870  539599 cli_runner.go:164] Run: docker container inspect no-preload-904303 --format={{.State.Status}}
	I1202 22:23:22.607708  539599 fix.go:112] recreateIfNeeded on no-preload-904303: state=Stopped err=<nil>
	W1202 22:23:22.607739  539599 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 22:23:22.611134  539599 out.go:252] * Restarting existing docker container for "no-preload-904303" ...
	I1202 22:23:22.611232  539599 cli_runner.go:164] Run: docker start no-preload-904303
	I1202 22:23:22.872287  539599 cli_runner.go:164] Run: docker container inspect no-preload-904303 --format={{.State.Status}}
	I1202 22:23:22.898165  539599 kic.go:430] container "no-preload-904303" state is running.
	I1202 22:23:22.898575  539599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-904303
	I1202 22:23:22.922950  539599 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/config.json ...
	I1202 22:23:22.923163  539599 machine.go:94] provisionDockerMachine start ...
	I1202 22:23:22.923221  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:22.941136  539599 main.go:143] libmachine: Using SSH client type: native
	I1202 22:23:22.941823  539599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1202 22:23:22.941839  539599 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 22:23:22.942552  539599 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 22:23:26.093608  539599 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-904303
	
	I1202 22:23:26.093635  539599 ubuntu.go:182] provisioning hostname "no-preload-904303"
	I1202 22:23:26.093723  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:26.111423  539599 main.go:143] libmachine: Using SSH client type: native
	I1202 22:23:26.111760  539599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1202 22:23:26.111780  539599 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-904303 && echo "no-preload-904303" | sudo tee /etc/hostname
	I1202 22:23:26.266533  539599 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-904303
	
	I1202 22:23:26.266609  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:26.285316  539599 main.go:143] libmachine: Using SSH client type: native
	I1202 22:23:26.285632  539599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1202 22:23:26.285686  539599 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-904303' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-904303/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-904303' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 22:23:26.434344  539599 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 22:23:26.434376  539599 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 22:23:26.434408  539599 ubuntu.go:190] setting up certificates
	I1202 22:23:26.434418  539599 provision.go:84] configureAuth start
	I1202 22:23:26.434484  539599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-904303
	I1202 22:23:26.452411  539599 provision.go:143] copyHostCerts
	I1202 22:23:26.452491  539599 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 22:23:26.452511  539599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 22:23:26.452589  539599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 22:23:26.452740  539599 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 22:23:26.452752  539599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 22:23:26.452788  539599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 22:23:26.452857  539599 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 22:23:26.452867  539599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 22:23:26.452891  539599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 22:23:26.452955  539599 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.no-preload-904303 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-904303]
	I1202 22:23:26.957849  539599 provision.go:177] copyRemoteCerts
	I1202 22:23:26.957926  539599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 22:23:26.957991  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:26.974993  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:27.078480  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 22:23:27.095667  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 22:23:27.113042  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 22:23:27.131355  539599 provision.go:87] duration metric: took 696.919514ms to configureAuth
	I1202 22:23:27.131382  539599 ubuntu.go:206] setting minikube options for container-runtime
	I1202 22:23:27.131619  539599 config.go:182] Loaded profile config "no-preload-904303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:23:27.131635  539599 machine.go:97] duration metric: took 4.208463625s to provisionDockerMachine
	I1202 22:23:27.131645  539599 start.go:293] postStartSetup for "no-preload-904303" (driver="docker")
	I1202 22:23:27.131661  539599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 22:23:27.131736  539599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 22:23:27.131781  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:27.148639  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:27.253719  539599 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 22:23:27.257164  539599 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 22:23:27.257191  539599 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 22:23:27.257209  539599 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 22:23:27.257270  539599 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 22:23:27.257353  539599 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 22:23:27.257455  539599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 22:23:27.264919  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:23:27.281885  539599 start.go:296] duration metric: took 150.211376ms for postStartSetup
	I1202 22:23:27.281973  539599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:23:27.282024  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:27.311997  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:27.415055  539599 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 22:23:27.419717  539599 fix.go:56] duration metric: took 4.829096727s for fixHost
	I1202 22:23:27.419743  539599 start.go:83] releasing machines lock for "no-preload-904303", held for 4.829150862s
	I1202 22:23:27.419810  539599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-904303
	I1202 22:23:27.437406  539599 ssh_runner.go:195] Run: cat /version.json
	I1202 22:23:27.437474  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:27.437744  539599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 22:23:27.437810  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:27.458622  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:27.458779  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:27.657246  539599 ssh_runner.go:195] Run: systemctl --version
	I1202 22:23:27.663712  539599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 22:23:27.667996  539599 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 22:23:27.668128  539599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 22:23:27.676170  539599 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
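(The find/mv pass above renames any pre-existing bridge or podman CNI definitions to *.mk_disabled so that the CNI minikube installs itself, kindnet per the recommendation logged earlier, is the only configuration the runtime loads; on this node there was nothing to disable.)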
	I1202 22:23:27.676194  539599 start.go:496] detecting cgroup driver to use...
	I1202 22:23:27.676226  539599 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 22:23:27.676274  539599 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 22:23:27.693769  539599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 22:23:27.707725  539599 docker.go:218] disabling cri-docker service (if available) ...
	I1202 22:23:27.707786  539599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 22:23:27.723493  539599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 22:23:27.736424  539599 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 22:23:27.854661  539599 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 22:23:27.963950  539599 docker.go:234] disabling docker service ...
	I1202 22:23:27.964063  539599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 22:23:27.978913  539599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 22:23:27.991719  539599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 22:23:28.130013  539599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 22:23:28.245844  539599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 22:23:28.260063  539599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 22:23:28.275361  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 22:23:28.284485  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 22:23:28.293418  539599 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 22:23:28.293496  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 22:23:28.303246  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:23:28.311801  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 22:23:28.320376  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:23:28.329142  539599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 22:23:28.337281  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 22:23:28.345966  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 22:23:28.354511  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1202 22:23:28.364146  539599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 22:23:28.372643  539599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 22:23:28.380193  539599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:23:28.515073  539599 ssh_runner.go:195] Run: sudo systemctl restart containerd
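The sed pipeline above rewrites /etc/containerd/config.toml to select the cgroupfs driver, matching the cgroupDriver that the kubeadm config below sets for the kubelet. The resulting setting can be checked with something like the following (a sketch; the exact TOML table path differs between containerd 1.x and 2.x config schemas):

	# Show the runc options block the sed edits target; expected output is shown
	# in the trailing comments (containerd 1.x-style path, an assumption here):
	grep -B1 'SystemdCgroup' /etc/containerd/config.toml
	#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	#     SystemdCgroup = false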
	I1202 22:23:28.603166  539599 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 22:23:28.603246  539599 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 22:23:28.607302  539599 start.go:564] Will wait 60s for crictl version
	I1202 22:23:28.607362  539599 ssh_runner.go:195] Run: which crictl
	I1202 22:23:28.610925  539599 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 22:23:28.635209  539599 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 22:23:28.635324  539599 ssh_runner.go:195] Run: containerd --version
	I1202 22:23:28.654862  539599 ssh_runner.go:195] Run: containerd --version
	I1202 22:23:28.679684  539599 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 22:23:28.682772  539599 cli_runner.go:164] Run: docker network inspect no-preload-904303 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:23:28.698164  539599 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1202 22:23:28.701843  539599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:23:28.711778  539599 kubeadm.go:884] updating cluster {Name:no-preload-904303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-904303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 22:23:28.711898  539599 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:23:28.711951  539599 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:23:28.735775  539599 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 22:23:28.735798  539599 cache_images.go:86] Images are preloaded, skipping loading
	I1202 22:23:28.735806  539599 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1202 22:23:28.735943  539599 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-904303 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-904303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
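A note on the unit text above: the doubled ExecStart= is the standard systemd drop-in idiom. The first, empty assignment clears the ExecStart list inherited from the base kubelet.service, and the second defines the replacement command line, so the drop-in fully controls how the kubelet is launched.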
	I1202 22:23:28.736021  539599 ssh_runner.go:195] Run: sudo crictl info
	I1202 22:23:28.764321  539599 cni.go:84] Creating CNI manager for ""
	I1202 22:23:28.764345  539599 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:23:28.764366  539599 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 22:23:28.764390  539599 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-904303 NodeName:no-preload-904303 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 22:23:28.764517  539599 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-904303"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
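The generated file is four YAML documents separated by ---: InitConfiguration and ClusterConfiguration for kubeadm itself, plus the KubeletConfiguration and KubeProxyConfiguration that kubeadm passes through to those components. Presumably minikube then points kubeadm at it roughly as follows (a sketch; the exact invocation and flags are not shown in this excerpt):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml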
	I1202 22:23:28.764598  539599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:23:28.772222  539599 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 22:23:28.772309  539599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 22:23:28.779448  539599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 22:23:28.793067  539599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 22:23:28.808283  539599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1202 22:23:28.821081  539599 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1202 22:23:28.825311  539599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:23:28.834378  539599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:23:28.950783  539599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:23:28.967883  539599 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303 for IP: 192.168.76.2
	I1202 22:23:28.967905  539599 certs.go:195] generating shared ca certs ...
	I1202 22:23:28.967921  539599 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:23:28.968118  539599 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 22:23:28.968196  539599 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 22:23:28.968211  539599 certs.go:257] generating profile certs ...
	I1202 22:23:28.968343  539599 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/client.key
	I1202 22:23:28.968433  539599 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/apiserver.key.c0dba49d
	I1202 22:23:28.968505  539599 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/proxy-client.key
	I1202 22:23:28.968647  539599 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 22:23:28.968707  539599 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 22:23:28.968723  539599 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 22:23:28.968768  539599 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 22:23:28.968803  539599 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 22:23:28.968848  539599 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 22:23:28.968924  539599 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:23:28.969565  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 22:23:28.991512  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 22:23:29.009709  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 22:23:29.027230  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 22:23:29.044859  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 22:23:29.062550  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 22:23:29.081123  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 22:23:29.099160  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 22:23:29.116143  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 22:23:29.133341  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 22:23:29.151391  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 22:23:29.168872  539599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
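The "scp memory --> ..." entries above push in-memory payloads (here the generated kubeconfig, 738 bytes) to the node rather than copying files from disk. A minimal sketch of the same idea, assuming an established golang.org/x/crypto/ssh client; the helper name and the sudo-tee transfer are illustrative, not minikube's actual ssh_runner API:

    package sshutil

    import (
        "bytes"

        "golang.org/x/crypto/ssh"
    )

    // pushMemory streams an in-memory payload to a remote path, similar in
    // spirit to the "scp memory --> /var/lib/minikube/kubeconfig" log lines.
    func pushMemory(client *ssh.Client, payload []byte, remotePath string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(payload)
        // sudo tee writes stdin to the destination; its stdout is discarded.
        return sess.Run("sudo tee " + remotePath + " >/dev/null")
    }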
	I1202 22:23:29.181520  539599 ssh_runner.go:195] Run: openssl version
	I1202 22:23:29.187676  539599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 22:23:29.196257  539599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 22:23:29.205152  539599 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 22:23:29.205469  539599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 22:23:29.248525  539599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
	I1202 22:23:29.256283  539599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 22:23:29.264508  539599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 22:23:29.268219  539599 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 22:23:29.268296  539599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 22:23:29.310186  539599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 22:23:29.318336  539599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 22:23:29.326934  539599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:23:29.330579  539599 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:23:29.330642  539599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:23:29.371552  539599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
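Each "openssl x509 -hash" run above prints the certificate's subject hash, which then becomes the "<hash>.0" symlink name under /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem), so that OpenSSL-based clients can find the CA by hash lookup. A hedged Go sketch of that two-step pattern, shelling out to openssl just as the log does (paths and the helper name are illustrative):

    package certs

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash mirrors the "openssl x509 -hash" + "ln -fs" pair in
    // the log: compute the subject hash, then create /etc/ssl/certs/<hash>.0.
    func linkBySubjectHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // ln -fs equivalent: replace any stale link before relinking.
        _ = os.Remove(link)
        return os.Symlink(certPath, link)
    }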
	I1202 22:23:29.379401  539599 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 22:23:29.383174  539599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 22:23:29.424449  539599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 22:23:29.465479  539599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 22:23:29.506825  539599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 22:23:29.548324  539599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 22:23:29.590054  539599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
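The six "-checkend 86400" probes verify that each control-plane certificate stays valid for at least the next 24 hours; openssl exits non-zero if a certificate expires inside that window, which is what would trigger regeneration. A minimal equivalent check (the 24h window matches the log; the helper name is an assumption):

    package certcheck

    import (
        "os/exec"
        "strconv"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires within d.
    // It relies on openssl's exit status, exactly like the log lines above.
    func expiresWithin(path string, d time.Duration) bool {
        secs := strconv.FormatInt(int64(d.Seconds()), 10)
        err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", secs).Run()
        return err != nil // non-zero exit: cert expires within the window
    }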
	I1202 22:23:29.631250  539599 kubeadm.go:401] StartCluster: {Name:no-preload-904303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-904303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:23:29.631343  539599 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 22:23:29.631409  539599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:23:29.658856  539599 cri.go:89] found id: ""
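The crictl invocation above uses --quiet to return bare container IDs, one per line; the empty result (found id: "") tells minikube that no kube-system containers survive in the containerd root, so it falls back to a cluster restart from the on-disk configuration. A sketch of issuing and parsing that same query (assumes crictl on PATH; the helper name is illustrative):

    package cri

    import (
        "os/exec"
        "strings"
    )

    // listKubeSystemIDs mirrors the crictl invocation in the log and splits
    // the quiet output into container IDs; an empty slice means none found.
    func listKubeSystemIDs() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // Fields drops blank lines
    }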
	I1202 22:23:29.658951  539599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 22:23:29.666619  539599 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 22:23:29.666682  539599 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 22:23:29.666755  539599 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 22:23:29.674368  539599 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 22:23:29.674844  539599 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-904303" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:23:29.674950  539599 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-261381/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-904303" cluster setting kubeconfig missing "no-preload-904303" context setting]
	I1202 22:23:29.675336  539599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
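The kubeconfig check above looks for both a cluster entry and a context entry named after the profile; here both are missing, so the file is repaired under a write lock. A hedged sketch of that repair using client-go's clientcmd package (the function name is illustrative; minikube's own kubeconfig helpers differ in detail):

    package kubecfg

    import (
        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    // ensureProfile adds cluster and context entries for name if absent, then
    // writes the file back, roughly the "needs updating (will repair)" path.
    func ensureProfile(path, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        if _, ok := cfg.Clusters[name]; !ok {
            c := api.NewCluster()
            c.Server = server
            cfg.Clusters[name] = c
        }
        if _, ok := cfg.Contexts[name]; !ok {
            ctx := api.NewContext()
            ctx.Cluster = name
            cfg.Contexts[name] = ctx
        }
        return clientcmd.WriteToFile(*cfg, path)
    }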
	I1202 22:23:29.676685  539599 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 22:23:29.684684  539599 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1202 22:23:29.684718  539599 kubeadm.go:602] duration metric: took 18.021774ms to restartPrimaryControlPlane
	I1202 22:23:29.684728  539599 kubeadm.go:403] duration metric: took 53.489812ms to StartCluster
	I1202 22:23:29.684761  539599 settings.go:142] acquiring lock: {Name:mk484fa83ac7553aeb154b510943680cadb4046e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:23:29.684832  539599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:23:29.685511  539599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:23:29.685828  539599 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 22:23:29.686092  539599 config.go:182] Loaded profile config "no-preload-904303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:23:29.686168  539599 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 22:23:29.686270  539599 addons.go:70] Setting storage-provisioner=true in profile "no-preload-904303"
	I1202 22:23:29.686289  539599 addons.go:70] Setting dashboard=true in profile "no-preload-904303"
	I1202 22:23:29.686314  539599 addons.go:239] Setting addon dashboard=true in "no-preload-904303"
	W1202 22:23:29.686329  539599 addons.go:248] addon dashboard should already be in state true
	I1202 22:23:29.686347  539599 addons.go:70] Setting default-storageclass=true in profile "no-preload-904303"
	I1202 22:23:29.686392  539599 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-904303"
	I1202 22:23:29.686365  539599 host.go:66] Checking if "no-preload-904303" exists ...
	I1202 22:23:29.686741  539599 cli_runner.go:164] Run: docker container inspect no-preload-904303 --format={{.State.Status}}
	I1202 22:23:29.687205  539599 cli_runner.go:164] Run: docker container inspect no-preload-904303 --format={{.State.Status}}
	I1202 22:23:29.686300  539599 addons.go:239] Setting addon storage-provisioner=true in "no-preload-904303"
	I1202 22:23:29.687386  539599 host.go:66] Checking if "no-preload-904303" exists ...
	I1202 22:23:29.687839  539599 cli_runner.go:164] Run: docker container inspect no-preload-904303 --format={{.State.Status}}
	I1202 22:23:29.691834  539599 out.go:179] * Verifying Kubernetes components...
	I1202 22:23:29.694973  539599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:23:29.723676  539599 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 22:23:29.726567  539599 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 22:23:29.727985  539599 addons.go:239] Setting addon default-storageclass=true in "no-preload-904303"
	I1202 22:23:29.728025  539599 host.go:66] Checking if "no-preload-904303" exists ...
	I1202 22:23:29.728437  539599 cli_runner.go:164] Run: docker container inspect no-preload-904303 --format={{.State.Status}}
	I1202 22:23:29.730468  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 22:23:29.730497  539599 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 22:23:29.730574  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:29.767956  539599 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:23:29.773779  539599 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:23:29.773811  539599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 22:23:29.773880  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:29.776990  539599 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 22:23:29.777009  539599 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 22:23:29.777075  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:29.799621  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:29.808373  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:29.833909  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
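The three "docker container inspect -f ..." runs above extract the host port that Docker mapped to the container's sshd (22/tcp); that port (33418 here) is what the subsequent ssh clients dial on 127.0.0.1. The same lookup as a sketch, reusing the --format template verbatim from the log (the helper name is illustrative):

    package dockerutil

    import (
        "os/exec"
        "strings"
    )

    // hostSSHPort returns the host port Docker mapped to 22/tcp inside the
    // container, using the same --format template as the log lines above.
    func hostSSHPort(container string) (string, error) {
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }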
	I1202 22:23:29.941913  539599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:23:29.974317  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:23:29.986732  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 22:23:29.986766  539599 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 22:23:29.988626  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 22:23:30.055576  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 22:23:30.055607  539599 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 22:23:30.079790  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 22:23:30.079826  539599 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 22:23:30.094823  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 22:23:30.094847  539599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 22:23:30.108744  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 22:23:30.108770  539599 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 22:23:30.122434  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 22:23:30.122505  539599 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 22:23:30.136402  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 22:23:30.136428  539599 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 22:23:30.149357  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 22:23:30.149379  539599 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 22:23:30.162415  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:23:30.162439  539599 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 22:23:30.175846  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:23:30.793328  539599 node_ready.go:35] waiting up to 6m0s for node "no-preload-904303" to be "Ready" ...
	W1202 22:23:30.793609  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:30.793677  539599 retry.go:31] will retry after 226.751663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:30.793357  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:30.793705  539599 retry.go:31] will retry after 335.186857ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:30.793399  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:30.793714  539599 retry.go:31] will retry after 230.72192ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.021321  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:23:31.024967  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 22:23:31.129921  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:31.131942  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.131973  539599 retry.go:31] will retry after 517.463505ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:31.159634  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.159717  539599 retry.go:31] will retry after 524.371625ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:31.201804  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.201874  539599 retry.go:31] will retry after 509.080585ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.649705  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:23:31.685138  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 22:23:31.711681  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:31.715443  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.715487  539599 retry.go:31] will retry after 516.235738ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:31.771458  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.771492  539599 retry.go:31] will retry after 380.898006ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:31.788553  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.788587  539599 retry.go:31] will retry after 774.998834ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:32.153620  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:23:32.209503  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:32.209561  539599 retry.go:31] will retry after 823.770631ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:32.232894  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:32.305176  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:32.305224  539599 retry.go:31] will retry after 976.715215ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:32.563746  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:32.634445  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:32.634479  539599 retry.go:31] will retry after 1.162769509s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
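
Every apply above fails for the same reason: kubectl cannot reach the apiserver on localhost:8443 to download the OpenAPI schema it validates manifests against, so each file is rejected before anything is sent to the cluster, and the retry.go:31 lines show minikube re-running the command after a growing, jittered delay. Below is a minimal Go sketch of that kind of backoff loop; it is illustrative only, not minikube's actual retry.go, and retryWithBackoff, maxAttempts, and the intervals are invented for the example.

    // Sketch of an exponential-backoff retry loop in the spirit of the
    // "will retry after ..." lines above. Hypothetical, not minikube code.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff calls apply until it succeeds or maxAttempts is
    // exhausted, sleeping a jittered, doubling interval between attempts,
    // which is why the logged delays drift upward rather than repeating.
    func retryWithBackoff(apply func() error, maxAttempts int) error {
    	interval := 500 * time.Millisecond
    	for attempt := 1; attempt <= maxAttempts; attempt++ {
    		err := apply()
    		if err == nil {
    			return nil
    		}
    		if attempt == maxAttempts {
    			return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
    		}
    		// Up to 50% jitter so parallel appliers (dashboard, storage-provisioner,
    		// storageclass) do not retry in lockstep.
    		sleep := interval + time.Duration(rand.Int63n(int64(interval/2)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		interval *= 2
    	}
    	return errors.New("unreachable")
    }

    func main() {
    	calls := 0
    	err := retryWithBackoff(func() error {
    		calls++
    		if calls < 4 {
    			return errors.New("connect: connection refused")
    		}
    		return nil
    	}, 10)
    	fmt.Println("result:", err)
    }

The jitter would explain why the logged delays above are staggered (976ms, 1.16s, 714ms, ...) rather than a clean doubling sequence.
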
	W1202 22:23:32.794321  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:33.033893  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:23:33.114977  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:33.115010  539599 retry.go:31] will retry after 714.879346ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:33.282251  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:33.363554  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:33.363588  539599 retry.go:31] will retry after 844.770065ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:33.798288  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:23:33.830720  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:23:33.885889  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:33.885960  539599 retry.go:31] will retry after 916.714322ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:33.923753  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:33.923789  539599 retry.go:31] will retry after 2.520575053s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:34.209208  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:34.271506  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:34.271542  539599 retry.go:31] will retry after 1.776064467s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:34.803362  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:34.881750  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:34.881784  539599 retry.go:31] will retry after 1.907866633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:35.294715  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:36.048128  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:36.141002  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:36.141036  539599 retry.go:31] will retry after 3.038923278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:36.444914  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:23:36.502465  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:36.502494  539599 retry.go:31] will retry after 3.727542871s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:36.789806  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:36.867898  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:36.867931  539599 retry.go:31] will retry after 1.939289637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:37.294882  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:38.808288  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:38.866824  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:38.866857  539599 retry.go:31] will retry after 5.857922191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:39.180195  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:39.238103  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:39.238139  539599 retry.go:31] will retry after 4.546361483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:39.794712  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:40.230300  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:23:40.298135  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:40.298168  539599 retry.go:31] will retry after 2.477378234s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:42.294051  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
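
Interleaved with the addon retries, the node_ready.go:55 lines show a second poll: a GET against https://192.168.76.2:8443/api/v1/nodes/no-preload-904303 to check the node's Ready condition, failing with the same connection refusal. A sketch of such a check using client-go follows; the kubeconfig path and the helper name nodeReady are assumptions for illustration, not minikube's actual implementation.

    // Sketch of a node-readiness poll, assuming a standard client-go setup.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady fetches the named node and reports whether its Ready
    // condition is True. While the apiserver is down, the Get itself fails
    // with the same "connection refused" error the log keeps printing.
    func nodeReady(ctx context.Context, client kubernetes.Interface, name string) (bool, error) {
    	node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	// Kubeconfig path is illustrative; any kubeconfig for the cluster works.
    	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	ready, err := nodeReady(context.Background(), client, "no-preload-904303")
    	fmt.Println("ready:", ready, "err:", err)
    }
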
	I1202 22:23:42.775949  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:23:42.856429  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:42.856458  539599 retry.go:31] will retry after 3.440810022s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:43.784770  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:43.864473  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:43.864510  539599 retry.go:31] will retry after 7.11067177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:44.294480  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:44.725002  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:44.781836  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:44.781872  539599 retry.go:31] will retry after 4.295308457s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:46.294868  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:46.298023  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:23:46.357922  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:46.357955  539599 retry.go:31] will retry after 9.581881684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:48.793879  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:49.077320  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:49.140226  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:49.140259  539599 retry.go:31] will retry after 6.825419406s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:50.976239  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:51.036594  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:51.036631  539599 retry.go:31] will retry after 6.351616515s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:51.293979  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:23:53.294465  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:23:55.294759  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:55.941027  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 22:23:55.966502  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:56.026736  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:56.026772  539599 retry.go:31] will retry after 11.682115483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:56.034530  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:56.034568  539599 retry.go:31] will retry after 21.573683328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:57.388457  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:57.448613  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:57.448645  539599 retry.go:31] will retry after 10.383504228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:57.794117  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:00.314267  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:02.794140  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:05.294024  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:24:07.709736  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:24:07.771482  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:24:07.771516  539599 retry.go:31] will retry after 18.342032468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:24:07.793883  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:24:07.833189  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:24:07.897703  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:24:07.897737  539599 retry.go:31] will retry after 18.464738845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:24:09.794642  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:12.294662  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:14.793819  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:17.294881  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:24:17.608495  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:24:17.666726  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:24:17.666760  539599 retry.go:31] will retry after 22.163689128s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:24:19.794151  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:22.293956  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:24.793964  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:24:26.114427  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:24:26.175665  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:24:26.175697  539599 retry.go:31] will retry after 38.633031501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:24:26.363620  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:24:26.423962  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:24:26.423998  539599 retry.go:31] will retry after 35.128284125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:24:27.293903  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:29.793923  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:31.794841  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:34.294771  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:36.793951  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:38.794027  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:24:39.831556  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:24:39.903792  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:24:39.903830  539599 retry.go:31] will retry after 44.791338045s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:24:40.794755  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:43.293945  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:45.294875  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:47.793934  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:50.293988  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:52.794271  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:55.293940  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:57.293989  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:59.294085  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
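The node_ready.go warnings interleaved throughout are a separate readiness poll: every couple of seconds the node object is fetched and its Ready condition inspected, and while the apiserver is down each GET fails with the same connection-refused error. A minimal client-go sketch of such a check, assuming an already-configured clientset (not minikube's exact code):

package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeReady fetches the node and reports whether its Ready condition is
// True. While the apiserver is unreachable, Get returns the connection
// errors seen in the log and the caller simply retries.
func nodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}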
	I1202 22:25:01.552593  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:25:01.623403  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:25:01.623527  539599 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1202 22:25:01.794122  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:04.293962  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:25:04.809098  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:25:04.882204  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:25:04.882306  539599 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1202 22:25:06.793963  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:09.293949  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:11.294016  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:13.793954  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:16.293941  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:18.794012  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:21.293858  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:23.293935  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:25:24.695432  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:25:24.758224  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:25:24.758320  539599 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 22:25:24.761124  539599 out.go:179] * Enabled addons: 
	I1202 22:25:24.763849  539599 addons.go:530] duration metric: took 1m55.077683231s for enable addons: enabled=[]
	W1202 22:25:25.294695  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:27.794152  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:29.794455  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:32.294020  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:34.793867  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:36.794964  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:39.293887  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:41.294800  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:43.793975  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:46.294913  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:48.794053  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:51.294004  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:53.793974  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:55.794883  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:58.294031  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:00.298658  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:02.793984  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:05.294880  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:07.793915  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:10.294520  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:12.294621  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:14.793949  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:17.293941  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:19.794051  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:21.794695  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:24.294836  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:26.793853  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:28.793902  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:30.794014  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:33.293873  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:35.293924  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:37.294030  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:39.794048  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:42.293975  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:44.294949  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:46.793828  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:49.294869  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:51.793896  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:54.293838  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:56.294801  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:58.793895  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:00.794580  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:03.293825  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:05.294894  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:07.793866  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:09.794023  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:12.293913  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:14.294900  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:16.794731  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:18.794973  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:21.293923  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:23.793947  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:26.294900  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:28.793913  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:30.793996  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:33.294859  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:35.793879  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:37.793923  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:39.794127  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:42.296591  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:44.794176  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:46.794502  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:48.794763  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:51.293938  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:53.793923  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:56.293967  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:58.794850  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:01.294024  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:03.793959  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:05.794868  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:08.293846  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:10.294170  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:12.794132  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:15.294790  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:17.793953  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:20.293990  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:22.793981  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:25.293952  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:27.794055  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:29.794270  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:32.293942  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:34.793866  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:36.794018  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:39.293843  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:41.293938  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:43.294289  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:45.294597  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:47.294896  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:49.794091  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:51.794902  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:54.293970  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:56.294081  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:58.793961  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:01.293995  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:03.793972  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:05.794096  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:08.294013  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:10.794007  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:13.294025  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:15.301168  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:17.794066  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:20.293905  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:22.293952  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:24.793885  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:26.794021  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:28.794762  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:30.793709  539599 node_ready.go:38] duration metric: took 6m0.000289785s for node "no-preload-904303" to be "Ready" ...
	I1202 22:29:30.796935  539599 out.go:203] 
	W1202 22:29:30.799794  539599 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 22:29:30.799816  539599 out.go:285] * 
	W1202 22:29:30.802151  539599 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 22:29:30.804961  539599 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p no-preload-904303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 80
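The failure mode is consistent through the whole log above: every kubectl apply and every node poll ends in connection refused, against both the node address (192.168.76.2:8443) and the in-node loopback ([::1]:8443), i.e. the apiserver never came back up after the stop/start cycle. The `--validate=false` hint in the kubectl stderr would only skip client-side schema validation; the apply would still fail against an unreachable server. A minimal manual probe, assuming shell access to the CI host and that crictl is available inside the kicbase node image (both assumptions, not taken from this report):

	# Does anything answer on the apiserver port inside the cluster network?
	curl -k https://192.168.76.2:8443/healthz
	# Is a kube-apiserver container running (or crash-looping) inside the node?
	docker exec no-preload-904303 crictl ps -a | grep kube-apiserver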
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-904303
helpers_test.go:243: (dbg) docker inspect no-preload-904303:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436",
	        "Created": "2025-12-02T22:12:48.891111789Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 539728,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T22:23:22.642400086Z",
	            "FinishedAt": "2025-12-02T22:23:21.316417439Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/hostname",
	        "HostsPath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/hosts",
	        "LogPath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436-json.log",
	        "Name": "/no-preload-904303",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-904303:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-904303",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436",
	                "LowerDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021/merged",
	                "UpperDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021/diff",
	                "WorkDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-904303",
	                "Source": "/var/lib/docker/volumes/no-preload-904303/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-904303",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-904303",
	                "name.minikube.sigs.k8s.io": "no-preload-904303",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b2c027d5072096e798c0b710c59b479b1cd1269246af142ef5e7ac6eb2231d21",
	            "SandboxKey": "/var/run/docker/netns/b2c027d50720",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-904303": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:71:1d:c1:74:1c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd7fe0193300ea97495798d9ee6ddb57b917596827758698a61d4a79d61723bf",
	                    "EndpointID": "d640ee5b3f22cc33822a769221598d10c33902fafb82f4150c227e00cda4eee4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-904303",
	                        "419e3dce7c5d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
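Per the NetworkSettings.Ports block above, the container's 8443/tcp is published on the host at 127.0.0.1:33421, so the same reachability check can also be made from the host without entering the container. A sketch (the /version endpoint choice is an assumption; given the log, connection refused is the expected result):

	curl -k https://127.0.0.1:33421/version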
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-904303 -n no-preload-904303
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-904303 -n no-preload-904303: exit status 2 (346.200383ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
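`--format={{.Host}}` reports only the container state, which is why the command prints Running while still exiting 2. To see which component is actually unhealthy, a hedged next step is to query the remaining status fields or the JSON form (field names follow minikube's status template; verify against the installed version):

	out/minikube-linux-arm64 status -p no-preload-904303 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'
	out/minikube-linux-arm64 status -p no-preload-904303 --output json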
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-904303 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ embed-certs-716386 image list --format=json                                                                                                                                                                                                                │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ pause   │ -p embed-certs-716386 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ unpause │ -p embed-certs-716386 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p embed-certs-716386                                                                                                                                                                                                                                      │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p embed-certs-716386                                                                                                                                                                                                                                      │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p disable-driver-mounts-122586                                                                                                                                                                                                                            │ disable-driver-mounts-122586 │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ start   │ -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-444714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ stop    │ -p default-k8s-diff-port-444714 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-444714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ start   │ -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:17 UTC │
	│ image   │ default-k8s-diff-port-444714 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ pause   │ -p default-k8s-diff-port-444714 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ unpause │ -p default-k8s-diff-port-444714 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ delete  │ -p default-k8s-diff-port-444714                                                                                                                                                                                                                            │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ delete  │ -p default-k8s-diff-port-444714                                                                                                                                                                                                                            │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ start   │ -p newest-cni-250247 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-904303 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:21 UTC │                     │
	│ stop    │ -p no-preload-904303 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:23 UTC │ 02 Dec 25 22:23 UTC │
	│ addons  │ enable dashboard -p no-preload-904303 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:23 UTC │ 02 Dec 25 22:23 UTC │
	│ start   │ -p no-preload-904303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:23 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-250247 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:26 UTC │                     │
	│ stop    │ -p newest-cni-250247 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:28 UTC │ 02 Dec 25 22:28 UTC │
	│ addons  │ enable dashboard -p newest-cni-250247 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:28 UTC │ 02 Dec 25 22:28 UTC │
	│ start   │ -p newest-cni-250247 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:28 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
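The post-mortem below corresponds to the last two unfinished starts in the audit table. The failing no-preload run can be reproduced with the exact flags recorded there (assuming the same out/minikube-linux-arm64 binary and docker host):

    out/minikube-linux-arm64 start -p no-preload-904303 --memory=3072 \
      --alsologtostderr --wait=true --preload=false \
      --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0
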
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 22:28:09
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 22:28:09.982860  546345 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:28:09.982990  546345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:28:09.983001  546345 out.go:374] Setting ErrFile to fd 2...
	I1202 22:28:09.983006  546345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:28:09.983258  546345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 22:28:09.983629  546345 out.go:368] Setting JSON to false
	I1202 22:28:09.984474  546345 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15028,"bootTime":1764699462,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 22:28:09.984540  546345 start.go:143] virtualization:  
	I1202 22:28:09.987326  546345 out.go:179] * [newest-cni-250247] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:28:09.991071  546345 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:28:09.991190  546345 notify.go:221] Checking for updates...
	I1202 22:28:09.996957  546345 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:28:09.999951  546345 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:28:10.003165  546345 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 22:28:10.010024  546345 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:28:10.023215  546345 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:28:10.026934  546345 config.go:182] Loaded profile config "newest-cni-250247": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:28:10.027740  546345 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:28:10.065520  546345 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:28:10.065629  546345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:28:10.146197  546345 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:28:10.137008488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:28:10.146302  546345 docker.go:319] overlay module found
	I1202 22:28:10.149701  546345 out.go:179] * Using the docker driver based on existing profile
	I1202 22:28:10.152553  546345 start.go:309] selected driver: docker
	I1202 22:28:10.152579  546345 start.go:927] validating driver "docker" against &{Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:28:10.152714  546345 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:28:10.153449  546345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:28:10.206765  546345 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:28:10.197797072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:28:10.207092  546345 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 22:28:10.207126  546345 cni.go:84] Creating CNI manager for ""
	I1202 22:28:10.207191  546345 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:28:10.207234  546345 start.go:353] cluster config:
	{Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:28:10.210373  546345 out.go:179] * Starting "newest-cni-250247" primary control-plane node in "newest-cni-250247" cluster
	I1202 22:28:10.213164  546345 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 22:28:10.216139  546345 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 22:28:10.218905  546345 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:28:10.218974  546345 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 22:28:10.241012  546345 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 22:28:10.241034  546345 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 22:28:10.277912  546345 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1202 22:28:10.461684  546345 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1202 22:28:10.461922  546345 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json ...
	I1202 22:28:10.461950  546345 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462038  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 22:28:10.462049  546345 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.248µs
	I1202 22:28:10.462062  546345 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 22:28:10.462074  546345 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462104  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 22:28:10.462109  546345 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 36.282µs
	I1202 22:28:10.462115  546345 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 22:28:10.462125  546345 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462157  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 22:28:10.462162  546345 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 38.727µs
	I1202 22:28:10.462169  546345 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 22:28:10.462179  546345 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462196  546345 cache.go:243] Successfully downloaded all kic artifacts
	I1202 22:28:10.462206  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 22:28:10.462212  546345 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 34.534µs
	I1202 22:28:10.462218  546345 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 22:28:10.462227  546345 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462237  546345 start.go:360] acquireMachinesLock for newest-cni-250247: {Name:mk16586a4ea8dcb4ae29d3b0c6fe6a71644be6ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462253  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 22:28:10.462258  546345 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 32.098µs
	I1202 22:28:10.462265  546345 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 22:28:10.462274  546345 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462280  546345 start.go:364] duration metric: took 29.16µs to acquireMachinesLock for "newest-cni-250247"
	I1202 22:28:10.462305  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 22:28:10.462305  546345 start.go:96] Skipping create...Using existing machine configuration
	I1202 22:28:10.462319  546345 fix.go:54] fixHost starting: 
	I1202 22:28:10.462321  546345 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462350  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 22:28:10.462360  546345 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 34.731µs
	I1202 22:28:10.462365  546345 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 22:28:10.462378  546345 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462404  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 22:28:10.462408  546345 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 35.396µs
	I1202 22:28:10.462414  546345 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 22:28:10.462311  546345 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 38.21µs
	I1202 22:28:10.462504  546345 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 22:28:10.462515  546345 cache.go:87] Successfully saved all images to host disk.
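Both preload URLs above returned 404 (no v18 preload tarball is published for v1.35.0-beta.0), so minikube fell back to its per-image cache, which it has just verified image by image. The cached tarballs can be listed on the CI host (paths as recorded in the log; the storage-provisioner image lives one directory up under gcr.io/k8s-minikube):

    # Each cached image is stored as a tar under the arch-specific cache dir
    ls /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/
    # kube-apiserver_v1.35.0-beta.0  kube-controller-manager_v1.35.0-beta.0
    # kube-scheduler_v1.35.0-beta.0  kube-proxy_v1.35.0-beta.0
    # etcd_3.6.5-0  pause_3.10.1  coredns/
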
	I1202 22:28:10.462628  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:10.483660  546345 fix.go:112] recreateIfNeeded on newest-cni-250247: state=Stopped err=<nil>
	W1202 22:28:10.483692  546345 fix.go:138] unexpected machine state, will restart: <nil>
	W1202 22:28:08.293846  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:10.294170  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
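The interleaved 539599 lines belong to the concurrent no-preload-904303 run: its apiserver at 192.168.76.2:8443 is refusing connections while that container comes back up. The endpoint can be probed directly from the CI host (a sketch; a refused or timed-out request matches the node_ready errors above):

    curl -skm 2 https://192.168.76.2:8443/healthz || echo "apiserver not reachable"
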
	I1202 22:28:10.487123  546345 out.go:252] * Restarting existing docker container for "newest-cni-250247" ...
	I1202 22:28:10.487212  546345 cli_runner.go:164] Run: docker start newest-cni-250247
	I1202 22:28:10.752920  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:10.774107  546345 kic.go:430] container "newest-cni-250247" state is running.
	I1202 22:28:10.775430  546345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:28:10.803310  546345 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json ...
	I1202 22:28:10.803660  546345 machine.go:94] provisionDockerMachine start ...
	I1202 22:28:10.803741  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:10.835254  546345 main.go:143] libmachine: Using SSH client type: native
	I1202 22:28:10.835574  546345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1202 22:28:10.835582  546345 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 22:28:10.836341  546345 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47630->127.0.0.1:33423: read: connection reset by peer
	I1202 22:28:13.985241  546345 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-250247
	
	I1202 22:28:13.985267  546345 ubuntu.go:182] provisioning hostname "newest-cni-250247"
	I1202 22:28:13.985331  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:14.004448  546345 main.go:143] libmachine: Using SSH client type: native
	I1202 22:28:14.004830  546345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1202 22:28:14.004852  546345 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-250247 && echo "newest-cni-250247" | sudo tee /etc/hostname
	I1202 22:28:14.162890  546345 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-250247
	
	I1202 22:28:14.162970  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:14.180049  546345 main.go:143] libmachine: Using SSH client type: native
	I1202 22:28:14.180364  546345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1202 22:28:14.180385  546345 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-250247' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-250247/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-250247' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 22:28:14.325738  546345 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 22:28:14.325762  546345 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 22:28:14.325781  546345 ubuntu.go:190] setting up certificates
	I1202 22:28:14.325790  546345 provision.go:84] configureAuth start
	I1202 22:28:14.325861  546345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:28:14.342936  546345 provision.go:143] copyHostCerts
	I1202 22:28:14.343009  546345 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 22:28:14.343017  546345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 22:28:14.343091  546345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 22:28:14.343188  546345 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 22:28:14.343193  546345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 22:28:14.343217  546345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 22:28:14.343264  546345 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 22:28:14.343269  546345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 22:28:14.343292  546345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 22:28:14.343342  546345 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.newest-cni-250247 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-250247]
	I1202 22:28:14.770203  546345 provision.go:177] copyRemoteCerts
	I1202 22:28:14.770270  546345 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 22:28:14.770310  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:14.787300  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:14.893004  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 22:28:14.909339  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 22:28:14.926255  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 22:28:14.942726  546345 provision.go:87] duration metric: took 616.921074ms to configureAuth
	I1202 22:28:14.942753  546345 ubuntu.go:206] setting minikube options for container-runtime
	I1202 22:28:14.942983  546345 config.go:182] Loaded profile config "newest-cni-250247": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:28:14.942996  546345 machine.go:97] duration metric: took 4.139308859s to provisionDockerMachine
	I1202 22:28:14.943006  546345 start.go:293] postStartSetup for "newest-cni-250247" (driver="docker")
	I1202 22:28:14.943017  546345 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 22:28:14.943072  546345 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 22:28:14.943129  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:14.960329  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:15.069600  546345 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 22:28:15.072888  546345 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 22:28:15.072916  546345 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 22:28:15.072928  546345 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 22:28:15.073008  546345 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 22:28:15.073125  546345 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 22:28:15.073236  546345 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 22:28:15.080571  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:28:15.098287  546345 start.go:296] duration metric: took 155.265122ms for postStartSetup
	I1202 22:28:15.098433  546345 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:28:15.098514  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:15.116407  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:15.218632  546345 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 22:28:15.223330  546345 fix.go:56] duration metric: took 4.761004698s for fixHost
	I1202 22:28:15.223357  546345 start.go:83] releasing machines lock for "newest-cni-250247", held for 4.761068204s
	I1202 22:28:15.223423  546345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:28:15.240165  546345 ssh_runner.go:195] Run: cat /version.json
	I1202 22:28:15.240226  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:15.240474  546345 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 22:28:15.240537  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:15.266111  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:15.266672  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:15.465947  546345 ssh_runner.go:195] Run: systemctl --version
	I1202 22:28:15.472302  546345 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 22:28:15.476459  546345 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 22:28:15.476528  546345 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 22:28:15.484047  546345 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 22:28:15.484071  546345 start.go:496] detecting cgroup driver to use...
	I1202 22:28:15.484132  546345 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 22:28:15.484196  546345 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 22:28:15.501336  546345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 22:28:15.514809  546345 docker.go:218] disabling cri-docker service (if available) ...
	I1202 22:28:15.514870  546345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 22:28:15.529978  546345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 22:28:15.542949  546345 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 22:28:15.646754  546345 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 22:28:15.763470  546345 docker.go:234] disabling docker service ...
	I1202 22:28:15.763534  546345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 22:28:15.778139  546345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 22:28:15.790687  546345 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 22:28:15.899099  546345 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 22:28:16.013695  546345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 22:28:16.027166  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 22:28:16.044232  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 22:28:16.054377  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 22:28:16.064256  546345 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 22:28:16.064370  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 22:28:16.074182  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:28:16.083929  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 22:28:16.093428  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:28:16.103465  546345 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 22:28:16.111974  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 22:28:16.120391  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 22:28:16.129324  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1202 22:28:16.138640  546345 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 22:28:16.146079  546345 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 22:28:16.153383  546345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:28:16.258631  546345 ssh_runner.go:195] Run: sudo systemctl restart containerd
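The sed sequence above pins the sandbox image and forces SystemdCgroup = false so containerd matches the cgroupfs driver detected on the host. After the daemon-reload and restart, the effective settings can be checked inside the node (a sketch using docker exec from the host):

    docker exec newest-cni-250247 \
      grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
    # expected:
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   SystemdCgroup = false
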
	I1202 22:28:16.349094  546345 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 22:28:16.349206  546345 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 22:28:16.353088  546345 start.go:564] Will wait 60s for crictl version
	I1202 22:28:16.353236  546345 ssh_runner.go:195] Run: which crictl
	I1202 22:28:16.356669  546345 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 22:28:16.382942  546345 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 22:28:16.383050  546345 ssh_runner.go:195] Run: containerd --version
	I1202 22:28:16.402826  546345 ssh_runner.go:195] Run: containerd --version
	I1202 22:28:16.429935  546345 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 22:28:16.432731  546345 cli_runner.go:164] Run: docker network inspect newest-cni-250247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:28:16.448989  546345 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 22:28:16.452808  546345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:28:16.464968  546345 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1202 22:28:12.794132  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:15.294790  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:16.467854  546345 kubeadm.go:884] updating cluster {Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 22:28:16.468035  546345 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:28:16.468117  546345 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:28:16.491782  546345 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 22:28:16.491805  546345 cache_images.go:86] Images are preloaded, skipping loading
	I1202 22:28:16.491813  546345 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1202 22:28:16.491914  546345 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-250247 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
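The unit text above is written as a systemd drop-in (the 10-kubeadm.conf scp'd a few lines below); the empty ExecStart= line clears the stock ExecStart before the override. The merged unit can be inspected on the node (a sketch, again via docker exec from the host):

    # Shows /lib/systemd/system/kubelet.service plus the drop-in override
    docker exec newest-cni-250247 systemctl cat kubelet
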
	I1202 22:28:16.491984  546345 ssh_runner.go:195] Run: sudo crictl info
	I1202 22:28:16.515416  546345 cni.go:84] Creating CNI manager for ""
	I1202 22:28:16.515440  546345 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:28:16.515457  546345 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1202 22:28:16.515491  546345 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-250247 NodeName:newest-cni-250247 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 22:28:16.515606  546345 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-250247"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 22:28:16.515677  546345 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:28:16.522844  546345 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 22:28:16.522912  546345 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 22:28:16.529836  546345 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 22:28:16.541819  546345 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 22:28:16.553461  546345 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
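	The kubeadm config rendered above is the payload of the kubeadm.yaml.new transfer on the line just before this note. As a sketch (assuming the cached kubeadm binary is present alongside kubelet, and that this kubeadm version still ships the `config validate` subcommand added in v1.26), the staged file could be sanity-checked on the node before it is used:
	# Hypothetical manual check; paths taken from this run's log.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new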
	I1202 22:28:16.565531  546345 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 22:28:16.569041  546345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
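	The one-liner above pins control-plane.minikube.internal in /etc/hosts without needing a root shell for the redirection. The same idiom, unpacked and annotated (IP as in this run):
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts   # drop any stale entry
	  echo $'192.168.85.2\tcontrol-plane.minikube.internal'      # append the pinned IP
	} > /tmp/h.$$                    # stage the new file as the unprivileged user
	sudo cp /tmp/h.$$ /etc/hosts     # only the final copy needs root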
	I1202 22:28:16.578309  546345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:28:16.682927  546345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:28:16.699616  546345 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247 for IP: 192.168.85.2
	I1202 22:28:16.699641  546345 certs.go:195] generating shared ca certs ...
	I1202 22:28:16.699658  546345 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:28:16.699787  546345 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 22:28:16.699846  546345 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 22:28:16.699857  546345 certs.go:257] generating profile certs ...
	I1202 22:28:16.699953  546345 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.key
	I1202 22:28:16.700029  546345 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde
	I1202 22:28:16.700095  546345 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key
	I1202 22:28:16.700208  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 22:28:16.700249  546345 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 22:28:16.700262  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 22:28:16.700295  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 22:28:16.700323  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 22:28:16.700356  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 22:28:16.700412  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:28:16.701077  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 22:28:16.721941  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 22:28:16.740644  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 22:28:16.759568  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 22:28:16.776264  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 22:28:16.794239  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 22:28:16.814293  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 22:28:16.833481  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 22:28:16.852733  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 22:28:16.870078  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 22:28:16.886149  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 22:28:16.902507  546345 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 22:28:16.913942  546345 ssh_runner.go:195] Run: openssl version
	I1202 22:28:16.919938  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 22:28:16.927825  546345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:28:16.931606  546345 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:28:16.931675  546345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:28:16.974237  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 22:28:16.981828  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 22:28:16.989638  546345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 22:28:16.992999  546345 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 22:28:16.993061  546345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 22:28:17.033731  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
	I1202 22:28:17.041307  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 22:28:17.049114  546345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 22:28:17.052710  546345 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 22:28:17.052816  546345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 22:28:17.093368  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
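	The three hash-and-symlink rounds above follow OpenSSL's CA lookup convention: `openssl x509 -hash` prints the subject-name hash, and OpenSSL resolves trust anchors in /etc/ssl/certs via <hash>.0 symlinks (hence b5213941.0, 51391683.0, and 3ec20f2e.0). One round, reduced to a sketch with filenames from this run:
	# Recreate one hash-named symlink by hand; $h comes out as b5213941 here.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"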
	I1202 22:28:17.101039  546345 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 22:28:17.104530  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 22:28:17.145234  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 22:28:17.186252  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 22:28:17.227251  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 22:28:17.270184  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 22:28:17.315680  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
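	The six openssl runs above use `-checkend 86400`, which exits non-zero if the certificate expires within the next 86400 seconds (24 h); a failing check is what would trigger regeneration. The same test on one cert, as a sketch (path from this run; the exit status, not the output, carries the answer):
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "still valid for at least 24h"
	else
	  echo "expires within 24h; would be regenerated"
	fi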
	I1202 22:28:17.356357  546345 kubeadm.go:401] StartCluster: {Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:28:17.356449  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 22:28:17.356551  546345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:28:17.384974  546345 cri.go:89] found id: ""
	I1202 22:28:17.385084  546345 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 22:28:17.392914  546345 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 22:28:17.392983  546345 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 22:28:17.393055  546345 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 22:28:17.400365  546345 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 22:28:17.400969  546345 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-250247" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:28:17.401222  546345 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-261381/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-250247" cluster setting kubeconfig missing "newest-cni-250247" context setting]
	I1202 22:28:17.401752  546345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
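	The "needs updating (will repair)" step above adds the missing cluster and context entries to the kubeconfig under a write lock. Roughly equivalent by hand (a sketch only; kubectl stands in for minikube's in-process kubeconfig writer, and names and endpoint are taken from this run):
	KC=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	kubectl config set-cluster newest-cni-250247 --server=https://192.168.85.2:8443 --kubeconfig="$KC"
	kubectl config set-context newest-cni-250247 --cluster=newest-cni-250247 --kubeconfig="$KC"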
	I1202 22:28:17.403065  546345 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 22:28:17.410696  546345 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 22:28:17.410762  546345 kubeadm.go:602] duration metric: took 17.7594ms to restartPrimaryControlPlane
	I1202 22:28:17.410793  546345 kubeadm.go:403] duration metric: took 54.438388ms to StartCluster
	I1202 22:28:17.410829  546345 settings.go:142] acquiring lock: {Name:mk484fa83ac7553aeb154b510943680cadb4046e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:28:17.410902  546345 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:28:17.412749  546345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:28:17.413013  546345 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 22:28:17.416416  546345 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 22:28:17.416535  546345 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-250247"
	I1202 22:28:17.416566  546345 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-250247"
	I1202 22:28:17.416596  546345 config.go:182] Loaded profile config "newest-cni-250247": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:28:17.416607  546345 host.go:66] Checking if "newest-cni-250247" exists ...
	I1202 22:28:17.416873  546345 addons.go:70] Setting dashboard=true in profile "newest-cni-250247"
	I1202 22:28:17.416893  546345 addons.go:239] Setting addon dashboard=true in "newest-cni-250247"
	W1202 22:28:17.416900  546345 addons.go:248] addon dashboard should already be in state true
	I1202 22:28:17.416923  546345 host.go:66] Checking if "newest-cni-250247" exists ...
	I1202 22:28:17.417319  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:17.417762  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:17.418220  546345 addons.go:70] Setting default-storageclass=true in profile "newest-cni-250247"
	I1202 22:28:17.418240  546345 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-250247"
	I1202 22:28:17.418515  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:17.421722  546345 out.go:179] * Verifying Kubernetes components...
	I1202 22:28:17.424546  546345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:28:17.473567  546345 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 22:28:17.473567  546345 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:28:17.475110  546345 addons.go:239] Setting addon default-storageclass=true in "newest-cni-250247"
	I1202 22:28:17.475145  546345 host.go:66] Checking if "newest-cni-250247" exists ...
	I1202 22:28:17.475548  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:17.477614  546345 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:17.477633  546345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 22:28:17.477833  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:17.481801  546345 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 22:28:17.489727  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 22:28:17.489757  546345 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 22:28:17.489831  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:17.519689  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:17.519729  546345 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 22:28:17.519742  546345 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 22:28:17.519796  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:17.551180  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:17.565506  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:17.644850  546345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:28:17.726531  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:17.763912  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 22:28:17.792014  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 22:28:17.792042  546345 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 22:28:17.824225  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 22:28:17.824250  546345 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 22:28:17.838468  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 22:28:17.838492  546345 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 22:28:17.851940  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 22:28:17.851965  546345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 22:28:17.864211  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 22:28:17.864276  546345 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 22:28:17.876057  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 22:28:17.876079  546345 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 22:28:17.887797  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 22:28:17.887867  546345 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 22:28:17.899526  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 22:28:17.899547  546345 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 22:28:17.911602  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:17.911626  546345 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 22:28:17.923996  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:18.303299  546345 api_server.go:52] waiting for apiserver process to appear ...
	I1202 22:28:18.303418  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:18.303565  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.303612  546345 retry.go:31] will retry after 133.710161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:18.303717  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.303748  546345 retry.go:31] will retry after 138.021594ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:18.303974  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.304008  546345 retry.go:31] will retry after 237.208538ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.438371  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:18.442705  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:18.512074  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.512108  546345 retry.go:31] will retry after 489.996663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:18.521184  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.521218  546345 retry.go:31] will retry after 506.041741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.542348  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:18.605737  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.605775  546345 retry.go:31] will retry after 347.613617ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.804191  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:18.953629  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:19.003207  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:19.021755  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.021793  546345 retry.go:31] will retry after 285.211473ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.028084  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:19.152805  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.152839  546345 retry.go:31] will retry after 301.33995ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:19.169007  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.169038  546345 retry.go:31] will retry after 787.522923ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.304323  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:19.307756  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:19.364720  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.364752  546345 retry.go:31] will retry after 744.498002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.454779  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:19.514605  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.514684  546345 retry.go:31] will retry after 936.080491ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.803793  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:19.957439  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:17.793953  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:20.293990  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:20.022370  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.022406  546345 retry.go:31] will retry after 798.963887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.109555  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:20.176777  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.176873  546345 retry.go:31] will retry after 799.677911ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.303906  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:20.451319  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:20.513056  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.513087  546345 retry.go:31] will retry after 774.001274ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.804493  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:20.822263  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:20.884574  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.884663  546345 retry.go:31] will retry after 1.794003449s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.976884  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:21.043200  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:21.043233  546345 retry.go:31] will retry after 2.577364105s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:21.287368  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:21.303812  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:21.396263  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:21.396297  546345 retry.go:31] will retry after 1.406655136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:21.803778  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:22.303682  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:22.678940  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:22.734117  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:22.734151  546345 retry.go:31] will retry after 2.241021271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:22.803453  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:22.803660  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:22.908987  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:22.909065  546345 retry.go:31] will retry after 2.592452064s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:23.304587  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:23.621298  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:23.681960  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:23.681992  546345 retry.go:31] will retry after 4.002263162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:23.804126  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:24.303637  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:24.803614  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:24.976147  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:22.793981  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:25.293952  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:25.036436  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:25.036470  546345 retry.go:31] will retry after 3.520246776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:25.303592  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:25.502542  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:25.567000  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:25.567033  546345 retry.go:31] will retry after 5.323254411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:25.804224  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:26.304369  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:26.803599  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:27.303952  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:27.684919  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:27.748186  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:27.748220  546345 retry.go:31] will retry after 5.733866836s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:27.804400  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:28.304209  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:28.556915  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:28.614437  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:28.614469  546345 retry.go:31] will retry after 5.59146354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:28.803555  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:29.303563  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:29.803564  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:27.794055  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:29.794270  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:32.293942  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:30.304278  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:30.803599  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:30.891315  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:30.954133  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:30.954165  546345 retry.go:31] will retry after 6.008326018s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:31.303642  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:31.803766  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:32.304456  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:32.804272  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:33.304447  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:33.482755  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:33.544609  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:33.544640  546345 retry.go:31] will retry after 5.236447557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:33.804125  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:34.206989  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:34.267528  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:34.267562  546345 retry.go:31] will retry after 5.128568146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:34.303642  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:34.804011  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:34.793866  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:36.794018  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:35.304181  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:35.803881  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:36.304159  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:36.804539  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:36.963637  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:37.037814  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:37.037848  546345 retry.go:31] will retry after 8.195284378s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:37.304208  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:37.804338  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:38.303552  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:38.781347  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:38.803757  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:38.846454  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:38.846487  546345 retry.go:31] will retry after 10.92120738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:39.304100  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:39.396834  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:39.454859  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:39.454893  546345 retry.go:31] will retry after 6.04045657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:39.804469  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:39.293843  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:41.293938  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:40.303596  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:40.804541  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:41.303922  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:41.803906  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:42.304508  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:42.804313  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:43.304463  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:43.803539  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:44.304169  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:44.803620  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:43.294289  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:45.294597  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:47.294896  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:45.235996  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:45.303907  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:45.410878  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:45.410909  546345 retry.go:31] will retry after 9.368309576s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:45.496112  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:45.553672  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:45.553705  546345 retry.go:31] will retry after 7.750202952s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:45.804015  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:46.303559  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:46.804327  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:47.303603  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:47.804053  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:48.303550  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:48.803634  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:49.303688  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:49.768489  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:49.804064  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:49.895914  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:49.895948  546345 retry.go:31] will retry after 11.070404971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:49.794091  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:51.794902  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:50.304462  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:50.803593  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:51.304256  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:51.804118  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:52.304451  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:52.804096  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:53.303837  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:53.304041  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:53.361880  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:53.361915  546345 retry.go:31] will retry after 21.51867829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:53.804496  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:54.303718  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:54.779367  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:54.803837  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:54.852160  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:54.852195  546345 retry.go:31] will retry after 25.514460464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:54.293970  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:56.294081  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:55.303807  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:55.804288  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:56.304329  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:56.803616  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:57.303836  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:57.804152  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:58.304034  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:58.803992  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:59.304109  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:59.804084  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:58.793961  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:01.293995  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:00.305594  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:00.803492  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:00.967275  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:29:01.023919  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:01.023952  546345 retry.go:31] will retry after 14.799716379s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:01.304168  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:01.804346  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:02.304261  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:02.803541  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:03.304078  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:03.804260  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:04.304145  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:04.803593  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:29:03.793972  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:05.794096  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:05.304303  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:05.804290  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:06.304157  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:06.804297  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:07.304486  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:07.803594  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:08.303514  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:08.803514  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:09.304264  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:09.804046  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:29:08.294013  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:10.794007  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:10.304151  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:10.804338  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:11.304108  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:11.803600  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:12.304520  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:12.804189  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:13.304155  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:13.803517  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:14.304548  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:14.803761  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:14.881559  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:29:14.937730  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:14.937760  546345 retry.go:31] will retry after 41.941175985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:29:13.294025  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:15.301168  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:15.316948  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:15.804548  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:15.823888  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:29:15.884943  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:15.884976  546345 retry.go:31] will retry after 35.611848449s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:16.303570  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:16.803687  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:17.304005  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:17.804234  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:17.804335  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:17.829227  546345 cri.go:89] found id: ""
	I1202 22:29:17.829257  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.829265  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:17.829272  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:17.829332  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:17.853121  546345 cri.go:89] found id: ""
	I1202 22:29:17.853146  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.853154  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:17.853161  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:17.853219  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:17.877170  546345 cri.go:89] found id: ""
	I1202 22:29:17.877195  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.877204  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:17.877210  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:17.877267  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:17.904673  546345 cri.go:89] found id: ""
	I1202 22:29:17.904698  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.904707  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:17.904717  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:17.904784  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:17.928244  546345 cri.go:89] found id: ""
	I1202 22:29:17.928284  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.928294  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:17.928301  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:17.928363  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:17.951262  546345 cri.go:89] found id: ""
	I1202 22:29:17.951283  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.951292  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:17.951299  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:17.951363  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:17.979941  546345 cri.go:89] found id: ""
	I1202 22:29:17.979971  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.979980  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:17.979987  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:17.980046  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:18.014330  546345 cri.go:89] found id: ""
	I1202 22:29:18.014352  546345 logs.go:282] 0 containers: []
	W1202 22:29:18.014361  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:18.014370  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:18.014382  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:18.070623  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:18.070659  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:18.086453  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:18.086483  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:18.147206  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:18.139601    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.140184    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.141932    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.142471    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.144157    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:18.139601    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.140184    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.141932    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.142471    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.144157    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:18.147229  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:18.147242  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:18.171557  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:18.171592  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 22:29:17.794066  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:20.293905  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:22.293952  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:20.367703  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:29:20.422565  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:20.422597  546345 retry.go:31] will retry after 40.968515426s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:20.701050  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:20.711132  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:20.711213  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:20.734019  546345 cri.go:89] found id: ""
	I1202 22:29:20.734042  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.734050  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:20.734057  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:20.734114  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:20.756521  546345 cri.go:89] found id: ""
	I1202 22:29:20.756546  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.756554  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:20.756561  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:20.756620  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:20.787826  546345 cri.go:89] found id: ""
	I1202 22:29:20.787852  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.787869  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:20.787876  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:20.787939  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:20.811402  546345 cri.go:89] found id: ""
	I1202 22:29:20.811427  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.811435  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:20.811441  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:20.811500  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:20.835289  546345 cri.go:89] found id: ""
	I1202 22:29:20.835314  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.835322  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:20.835329  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:20.835404  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:20.858522  546345 cri.go:89] found id: ""
	I1202 22:29:20.858548  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.858556  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:20.858563  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:20.858622  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:20.883759  546345 cri.go:89] found id: ""
	I1202 22:29:20.883783  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.883791  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:20.883798  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:20.883857  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:20.907968  546345 cri.go:89] found id: ""
	I1202 22:29:20.907992  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.908001  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:20.908010  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:20.908020  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:20.962992  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:20.963028  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:20.978472  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:20.978499  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:21.039749  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:21.032843    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.033345    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.034809    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.035236    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.036659    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:21.032843    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.033345    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.034809    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.035236    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.036659    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:21.039771  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:21.039784  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:21.064157  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:21.064194  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:23.595745  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:23.606920  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:23.606996  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:23.633420  546345 cri.go:89] found id: ""
	I1202 22:29:23.633450  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.633459  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:23.633473  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:23.633532  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:23.659559  546345 cri.go:89] found id: ""
	I1202 22:29:23.659581  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.659590  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:23.659596  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:23.659663  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:23.684986  546345 cri.go:89] found id: ""
	I1202 22:29:23.685010  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.685031  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:23.685039  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:23.685099  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:23.709487  546345 cri.go:89] found id: ""
	I1202 22:29:23.709560  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.709583  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:23.709604  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:23.709734  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:23.734133  546345 cri.go:89] found id: ""
	I1202 22:29:23.734159  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.734167  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:23.734173  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:23.734233  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:23.758126  546345 cri.go:89] found id: ""
	I1202 22:29:23.758190  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.758213  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:23.758234  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:23.758327  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:23.782448  546345 cri.go:89] found id: ""
	I1202 22:29:23.782471  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.782480  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:23.782505  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:23.782579  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:23.806736  546345 cri.go:89] found id: ""
	I1202 22:29:23.806761  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.806770  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:23.806780  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:23.806790  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:23.865578  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:23.865619  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:23.881434  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:23.881470  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:23.944584  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:23.936843    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.937517    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.939081    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.939622    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.941360    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:23.936843    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.937517    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.939081    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.939622    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.941360    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:23.944606  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:23.944619  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:23.970159  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:23.970207  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 22:29:24.793885  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:26.794021  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:26.498138  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:26.508783  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:26.508852  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:26.537015  546345 cri.go:89] found id: ""
	I1202 22:29:26.537037  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.537046  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:26.537053  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:26.537110  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:26.574312  546345 cri.go:89] found id: ""
	I1202 22:29:26.574339  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.574347  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:26.574354  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:26.574411  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:26.629052  546345 cri.go:89] found id: ""
	I1202 22:29:26.629079  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.629087  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:26.629094  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:26.629150  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:26.658217  546345 cri.go:89] found id: ""
	I1202 22:29:26.658251  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.658259  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:26.658266  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:26.658337  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:26.681717  546345 cri.go:89] found id: ""
	I1202 22:29:26.681751  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.681760  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:26.681778  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:26.681850  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:26.704611  546345 cri.go:89] found id: ""
	I1202 22:29:26.704646  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.704655  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:26.704661  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:26.704733  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:26.728028  546345 cri.go:89] found id: ""
	I1202 22:29:26.728091  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.728115  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:26.728137  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:26.728223  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:26.755557  546345 cri.go:89] found id: ""
	I1202 22:29:26.755582  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.755590  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:26.755600  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:26.755611  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:26.786053  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:26.786080  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:26.841068  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:26.841100  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:26.856799  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:26.856829  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:26.924274  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:26.913901    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.914406    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.918374    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.919140    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.920188    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:26.913901    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.914406    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.918374    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.919140    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.920188    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:26.924338  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:26.924358  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:29.449918  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:29.460186  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:29.460259  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:29.483893  546345 cri.go:89] found id: ""
	I1202 22:29:29.483915  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.483924  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:29.483930  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:29.483990  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:29.507973  546345 cri.go:89] found id: ""
	I1202 22:29:29.507999  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.508007  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:29.508013  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:29.508073  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:29.532020  546345 cri.go:89] found id: ""
	I1202 22:29:29.532045  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.532054  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:29.532061  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:29.532119  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:29.583563  546345 cri.go:89] found id: ""
	I1202 22:29:29.583590  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.583599  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:29.583606  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:29.583664  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:29.626796  546345 cri.go:89] found id: ""
	I1202 22:29:29.626821  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.626830  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:29.626837  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:29.626910  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:29.650151  546345 cri.go:89] found id: ""
	I1202 22:29:29.650179  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.650186  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:29.650193  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:29.650254  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:29.677989  546345 cri.go:89] found id: ""
	I1202 22:29:29.678015  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.678023  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:29.678031  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:29.678090  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:29.707431  546345 cri.go:89] found id: ""
	I1202 22:29:29.707457  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.707465  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:29.707475  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:29.707486  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:29.773447  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:29.766251    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.766804    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.768321    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.768748    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.770331    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:29.766251    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.766804    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.768321    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.768748    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.770331    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:29.773470  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:29.773484  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:29.798530  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:29.798604  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:29.825490  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:29.825517  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:29.884423  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:29.884461  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1202 22:29:28.794762  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:30.793709  539599 node_ready.go:38] duration metric: took 6m0.000289785s for node "no-preload-904303" to be "Ready" ...
	I1202 22:29:30.796935  539599 out.go:203] 
	W1202 22:29:30.799794  539599 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 22:29:30.799816  539599 out.go:285] * 
	W1202 22:29:30.802151  539599 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 22:29:30.804961  539599 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569235694Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569248986Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569265026Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569277383Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569295622Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569310916Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569325226Z" level=info msg="runtime interface created"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569332503Z" level=info msg="created NRI interface"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569349447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569391611Z" level=info msg="Connect containerd service"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569647005Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.570228722Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.580119279Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.580197307Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.580232687Z" level=info msg="Start subscribing containerd event"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.580281530Z" level=info msg="Start recovering state"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.600449758Z" level=info msg="Start event monitor"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.600517374Z" level=info msg="Start cni network conf syncer for default"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.600528393Z" level=info msg="Start streaming server"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.600540085Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.600549102Z" level=info msg="runtime interface starting up..."
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.600555978Z" level=info msg="starting plugins..."
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.600709368Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 22:23:28 no-preload-904303 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.601830385Z" level=info msg="containerd successfully booted in 0.051957s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:31.984586    3950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:31.985094    3950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:31.986784    3950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:31.987425    3950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:31.989030    3950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
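
All five memcache errors above are one symptom: nothing is listening on port 8443 inside the node, so kubectl cannot even fetch the API group list. A direct probe makes the state unambiguous; this is a hypothetical session using /livez, the apiserver's health endpoint:

    $ curl -k https://localhost:8443/livez
    curl: (7) Failed to connect to localhost port 8443: Connection refused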
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
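
Each "overlayfs: idmapped layers are currently not supported" line is logged once per container mount that requests idmapped layers; this 5.15 kernel predates overlayfs support for them (added in later kernels, around 5.19), so the messages are noise rather than failures. A hypothetical way to gauge the volume, which for the excerpt above would print 24:

    $ dmesg | grep -c 'idmapped layers are currently not supported'
    24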
	
	
	==> kernel <==
	 22:29:32 up  4:11,  0 user,  load average: 0.54, 0.66, 1.12
	Linux no-preload-904303 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 22:29:28 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:29:29 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 480.
	Dec 02 22:29:29 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:29:29 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:29:29 no-preload-904303 kubelet[3827]: E1202 22:29:29.607627    3827 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:29:29 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:29:29 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:29:30 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 02 22:29:30 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:29:30 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:29:30 no-preload-904303 kubelet[3832]: E1202 22:29:30.357090    3832 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:29:30 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:29:30 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:29:31 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 02 22:29:31 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:29:31 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:29:31 no-preload-904303 kubelet[3838]: E1202 22:29:31.139477    3838 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:29:31 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:29:31 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:29:31 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 02 22:29:31 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:29:31 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:29:31 no-preload-904303 kubelet[3927]: E1202 22:29:31.899302    3927 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:29:31 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:29:31 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
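
The restart loop above (counter at 483 and climbing) comes from a single validation failure: this kubelet build refuses to start on a host using cgroup v1, and the Ubuntu 20.04 host runs the legacy hierarchy, so every systemd restart hits the same error. A quick check of which hierarchy is mounted (hypothetical session; cgroup2fs would indicate the unified v2 layout this kubelet requires, tmpfs the v1 layout it rejects):

    $ stat -fc %T /sys/fs/cgroup
    tmpfs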
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-904303 -n no-preload-904303
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-904303 -n no-preload-904303: exit status 2 (386.543588ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-904303" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (370.24s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (117.03s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-250247 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1202 22:26:17.027261  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/default-k8s-diff-port-444714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:26:28.654443  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:26:44.729645  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/default-k8s-diff-port-444714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:27:36.513632  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:27:47.203193  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:27:50.414588  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-250247 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m55.462922398s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-250247 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
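
The four validation errors in the stderr block above are all downstream of the same dead apiserver: kubectl apply fetches the OpenAPI schema from the server before validating, so it fails on connect rather than on the manifests themselves. Skipping validation would not help, since apply still has to reach the server; a hypothetical rerun using the same binary path from the log would end the same way:

    $ sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --validate=false \
        -f /etc/kubernetes/addons/metrics-server-service.yaml
    The connection to the server localhost:8443 was refused - did you specify the right host or port?
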
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-250247
helpers_test.go:243: (dbg) docker inspect newest-cni-250247:

-- stdout --
	[
	    {
	        "Id": "8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2",
	        "Created": "2025-12-02T22:17:45.695373395Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 531060,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T22:17:45.76228908Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2/hosts",
	        "LogPath": "/var/lib/docker/containers/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2-json.log",
	        "Name": "/newest-cni-250247",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-250247:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-250247",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2",
	                "LowerDir": "/var/lib/docker/overlay2/ca046fece97404be279f28506c35572e6b66d2a0b57e70b756461317d0a04310-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ca046fece97404be279f28506c35572e6b66d2a0b57e70b756461317d0a04310/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ca046fece97404be279f28506c35572e6b66d2a0b57e70b756461317d0a04310/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ca046fece97404be279f28506c35572e6b66d2a0b57e70b756461317d0a04310/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-250247",
	                "Source": "/var/lib/docker/volumes/newest-cni-250247/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-250247",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-250247",
	                "name.minikube.sigs.k8s.io": "newest-cni-250247",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7d0c64ba16bbb08b47bf29cabc5b530a394e75cd494629324cf5f757a6339c21",
	            "SandboxKey": "/var/run/docker/netns/7d0c64ba16bb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33414"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33417"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33416"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-250247": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:c0:2b:98:94:65",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cfffc9981d9cab6ce5981c2e79bfb0dd15ae8455b64d0bfc795000bbbe273d91",
	                    "EndpointID": "fdf2c5f777ff277e526828919e43c78a65f8b5b8ad0c0be50ec029d55e549da2",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-250247",
	                        "8d631b193c97"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
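
Note that the PortBindings under HostConfig request ephemeral host ports (empty HostPort); the resolved mappings appear only under NetworkSettings.Ports. The harness extracts them with a Go template, and the same shape works by hand; for the 8443/tcp entry above this hypothetical invocation would print 33416:

    $ docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-250247
    33416
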
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-250247 -n newest-cni-250247
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-250247 -n newest-cni-250247: exit status 6 (320.283395ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1202 22:28:07.213147  545836 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-250247" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
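
Exit status 6 here is a kubeconfig problem rather than a cluster problem: the container is running, but the profile's context was never written to /home/jenkins/minikube-integration/21997-261381/kubeconfig because the initial start never completed. The tool's own suggestion in the stdout above is the usual fix; a hypothetical session would look like:

    $ out/minikube-linux-arm64 -p newest-cni-250247 update-context
    $ kubectl config get-contexts newest-cni-250247
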
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-250247 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p embed-certs-716386 --alsologtostderr -v=3                                                                                                                                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:14 UTC │
	│ addons  │ enable dashboard -p embed-certs-716386 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                              │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:14 UTC │
	│ start   │ -p embed-certs-716386 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:14 UTC │ 02 Dec 25 22:15 UTC │
	│ image   │ embed-certs-716386 image list --format=json                                                                                                                                                                                                                │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ pause   │ -p embed-certs-716386 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ unpause │ -p embed-certs-716386 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p embed-certs-716386                                                                                                                                                                                                                                      │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p embed-certs-716386                                                                                                                                                                                                                                      │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p disable-driver-mounts-122586                                                                                                                                                                                                                            │ disable-driver-mounts-122586 │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ start   │ -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-444714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ stop    │ -p default-k8s-diff-port-444714 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-444714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ start   │ -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:17 UTC │
	│ image   │ default-k8s-diff-port-444714 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ pause   │ -p default-k8s-diff-port-444714 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ unpause │ -p default-k8s-diff-port-444714 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ delete  │ -p default-k8s-diff-port-444714                                                                                                                                                                                                                            │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ delete  │ -p default-k8s-diff-port-444714                                                                                                                                                                                                                            │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ start   │ -p newest-cni-250247 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-904303 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:21 UTC │                     │
	│ stop    │ -p no-preload-904303 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:23 UTC │ 02 Dec 25 22:23 UTC │
	│ addons  │ enable dashboard -p no-preload-904303 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:23 UTC │ 02 Dec 25 22:23 UTC │
	│ start   │ -p no-preload-904303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:23 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-250247 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:26 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 22:23:22
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 22:23:22.383311  539599 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:23:22.383495  539599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:23:22.383526  539599 out.go:374] Setting ErrFile to fd 2...
	I1202 22:23:22.383548  539599 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:23:22.384147  539599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 22:23:22.384563  539599 out.go:368] Setting JSON to false
	I1202 22:23:22.385463  539599 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14741,"bootTime":1764699462,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 22:23:22.385557  539599 start.go:143] virtualization:  
	I1202 22:23:22.388730  539599 out.go:179] * [no-preload-904303] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:23:22.392696  539599 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:23:22.392797  539599 notify.go:221] Checking for updates...
	I1202 22:23:22.398478  539599 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:23:22.401214  539599 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:23:22.404143  539599 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 22:23:22.406933  539599 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:23:22.409907  539599 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:23:22.413394  539599 config.go:182] Loaded profile config "no-preload-904303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:23:22.413984  539599 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:23:22.437751  539599 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:23:22.437859  539599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:23:22.494629  539599 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:23:22.485394686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:23:22.494770  539599 docker.go:319] overlay module found
	I1202 22:23:22.499692  539599 out.go:179] * Using the docker driver based on existing profile
	I1202 22:23:22.502553  539599 start.go:309] selected driver: docker
	I1202 22:23:22.502578  539599 start.go:927] validating driver "docker" against &{Name:no-preload-904303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-904303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:23:22.502679  539599 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:23:22.503398  539599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:23:22.557969  539599 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:23:22.549356271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:23:22.558338  539599 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 22:23:22.558370  539599 cni.go:84] Creating CNI manager for ""
	I1202 22:23:22.558425  539599 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:23:22.558467  539599 start.go:353] cluster config:
	{Name:no-preload-904303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-904303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:23:22.561606  539599 out.go:179] * Starting "no-preload-904303" primary control-plane node in "no-preload-904303" cluster
	I1202 22:23:22.564413  539599 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 22:23:22.567370  539599 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 22:23:22.570230  539599 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:23:22.570321  539599 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 22:23:22.570386  539599 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/config.json ...
	I1202 22:23:22.570675  539599 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.570753  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 22:23:22.570769  539599 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.774µs
	I1202 22:23:22.570784  539599 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 22:23:22.570800  539599 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.570834  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 22:23:22.570844  539599 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 44.823µs
	I1202 22:23:22.570850  539599 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 22:23:22.570866  539599 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.570898  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 22:23:22.570907  539599 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 42.493µs
	I1202 22:23:22.570915  539599 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 22:23:22.570926  539599 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.570958  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 22:23:22.570975  539599 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 42.838µs
	I1202 22:23:22.570982  539599 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 22:23:22.570991  539599 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.571040  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 22:23:22.571049  539599 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 59.452µs
	I1202 22:23:22.571055  539599 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 22:23:22.571064  539599 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.571094  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 22:23:22.571103  539599 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 39.457µs
	I1202 22:23:22.571108  539599 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 22:23:22.571117  539599 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.571146  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 22:23:22.571154  539599 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 38.03µs
	I1202 22:23:22.571159  539599 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 22:23:22.571168  539599 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.571197  539599 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 22:23:22.571205  539599 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 38.12µs
	I1202 22:23:22.571211  539599 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 22:23:22.571217  539599 cache.go:87] Successfully saved all images to host disk.
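
Each "exists"/"succeeded" pair above resolves against minikube's on-disk image cache, keyed by architecture, registry, image name, and tag. A hypothetical listing of the layout implied by those paths:

    $ ls /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/
    coredns  etcd_3.6.5-0  kube-apiserver_v1.35.0-beta.0  kube-controller-manager_v1.35.0-beta.0
    kube-proxy_v1.35.0-beta.0  kube-scheduler_v1.35.0-beta.0  pause_3.10.1
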
	I1202 22:23:22.590450  539599 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 22:23:22.590474  539599 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 22:23:22.590493  539599 cache.go:243] Successfully downloaded all kic artifacts
	I1202 22:23:22.590523  539599 start.go:360] acquireMachinesLock for no-preload-904303: {Name:mk2c72bf119f004a39efee961482984889590787 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:23:22.590579  539599 start.go:364] duration metric: took 35.757µs to acquireMachinesLock for "no-preload-904303"
	I1202 22:23:22.590604  539599 start.go:96] Skipping create...Using existing machine configuration
	I1202 22:23:22.590613  539599 fix.go:54] fixHost starting: 
	I1202 22:23:22.590870  539599 cli_runner.go:164] Run: docker container inspect no-preload-904303 --format={{.State.Status}}
	I1202 22:23:22.607708  539599 fix.go:112] recreateIfNeeded on no-preload-904303: state=Stopped err=<nil>
	W1202 22:23:22.607739  539599 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 22:23:22.611134  539599 out.go:252] * Restarting existing docker container for "no-preload-904303" ...
	I1202 22:23:22.611232  539599 cli_runner.go:164] Run: docker start no-preload-904303
	I1202 22:23:22.872287  539599 cli_runner.go:164] Run: docker container inspect no-preload-904303 --format={{.State.Status}}
	I1202 22:23:22.898165  539599 kic.go:430] container "no-preload-904303" state is running.
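
The "docker container inspect --format={{.State.Status}}" call recurs throughout this log whenever minikube needs the container's lifecycle state. This sketch simply shells out the same way, with the container name taken from the log.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerState runs the same inspect command seen in the cli_runner lines.
    func containerState(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", name,
    		"--format", "{{.State.Status}}").Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	state, err := containerState("no-preload-904303")
    	fmt.Println(state, err) // e.g. "running" after the docker start above
    }
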
	I1202 22:23:22.898575  539599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-904303
	I1202 22:23:22.922950  539599 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/config.json ...
	I1202 22:23:22.923163  539599 machine.go:94] provisionDockerMachine start ...
	I1202 22:23:22.923221  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:22.941136  539599 main.go:143] libmachine: Using SSH client type: native
	I1202 22:23:22.941823  539599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1202 22:23:22.941839  539599 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 22:23:22.942552  539599 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1202 22:23:26.093608  539599 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-904303
	
	I1202 22:23:26.093635  539599 ubuntu.go:182] provisioning hostname "no-preload-904303"
	I1202 22:23:26.093723  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:26.111423  539599 main.go:143] libmachine: Using SSH client type: native
	I1202 22:23:26.111760  539599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1202 22:23:26.111780  539599 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-904303 && echo "no-preload-904303" | sudo tee /etc/hostname
	I1202 22:23:26.266533  539599 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-904303
	
	I1202 22:23:26.266609  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:26.285316  539599 main.go:143] libmachine: Using SSH client type: native
	I1202 22:23:26.285632  539599 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1202 22:23:26.285686  539599 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-904303' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-904303/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-904303' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 22:23:26.434344  539599 main.go:143] libmachine: SSH cmd err, output: <nil>: 
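
The shell snippet just executed makes the /etc/hosts edit idempotent: do nothing if the hostname is already present, rewrite an existing 127.0.1.1 entry if there is one, and append a new entry otherwise. The same decision logic in Go, operating on the file contents as a string (an illustration, not minikube's provisioner code):

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    // ensureHostsEntry mirrors the grep/sed/tee branches of the shell above.
    func ensureHostsEntry(hosts, name string) string {
    	if strings.Contains(hosts, name) {
    		return hosts // hostname already mapped, nothing to do
    	}
    	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if re.MatchString(hosts) {
    		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
    	}
    	return hosts + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
    	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "no-preload-904303"))
    }
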
	I1202 22:23:26.434376  539599 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 22:23:26.434408  539599 ubuntu.go:190] setting up certificates
	I1202 22:23:26.434418  539599 provision.go:84] configureAuth start
	I1202 22:23:26.434484  539599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-904303
	I1202 22:23:26.452411  539599 provision.go:143] copyHostCerts
	I1202 22:23:26.452491  539599 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 22:23:26.452511  539599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 22:23:26.452589  539599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 22:23:26.452740  539599 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 22:23:26.452752  539599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 22:23:26.452788  539599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 22:23:26.452857  539599 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 22:23:26.452867  539599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 22:23:26.452891  539599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 22:23:26.452955  539599 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.no-preload-904303 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-904303]
	I1202 22:23:26.957849  539599 provision.go:177] copyRemoteCerts
	I1202 22:23:26.957926  539599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 22:23:26.957991  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:26.974993  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:27.078480  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 22:23:27.095667  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 22:23:27.113042  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 22:23:27.131355  539599 provision.go:87] duration metric: took 696.919514ms to configureAuth
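
configureAuth regenerates the machine's server certificate with the SAN list shown in the provision.go line above (127.0.0.1, 192.168.76.2, localhost, minikube, no-preload-904303). A self-contained sketch of issuing such a certificate with Go's crypto/x509; for brevity it self-signs, whereas minikube signs with its ca.pem/ca-key.pem pair.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-904303"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// the SAN list from the provision.go line above
    		DNSNames:    []string{"localhost", "minikube", "no-preload-904303"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    	}
    	// self-signed for brevity; minikube signs with its CA key instead
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }
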
	I1202 22:23:27.131382  539599 ubuntu.go:206] setting minikube options for container-runtime
	I1202 22:23:27.131619  539599 config.go:182] Loaded profile config "no-preload-904303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:23:27.131635  539599 machine.go:97] duration metric: took 4.208463625s to provisionDockerMachine
	I1202 22:23:27.131645  539599 start.go:293] postStartSetup for "no-preload-904303" (driver="docker")
	I1202 22:23:27.131661  539599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 22:23:27.131736  539599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 22:23:27.131781  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:27.148639  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:27.253719  539599 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 22:23:27.257164  539599 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 22:23:27.257191  539599 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 22:23:27.257209  539599 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 22:23:27.257270  539599 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 22:23:27.257353  539599 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 22:23:27.257455  539599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 22:23:27.264919  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:23:27.281885  539599 start.go:296] duration metric: took 150.211376ms for postStartSetup
	I1202 22:23:27.281973  539599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:23:27.282024  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:27.311997  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:27.415055  539599 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 22:23:27.419717  539599 fix.go:56] duration metric: took 4.829096727s for fixHost
	I1202 22:23:27.419743  539599 start.go:83] releasing machines lock for "no-preload-904303", held for 4.829150862s
	I1202 22:23:27.419810  539599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-904303
	I1202 22:23:27.437406  539599 ssh_runner.go:195] Run: cat /version.json
	I1202 22:23:27.437474  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:27.437744  539599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 22:23:27.437810  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:27.458622  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:27.458779  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:27.657246  539599 ssh_runner.go:195] Run: systemctl --version
	I1202 22:23:27.663712  539599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 22:23:27.667996  539599 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 22:23:27.668128  539599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 22:23:27.676170  539599 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 22:23:27.676194  539599 start.go:496] detecting cgroup driver to use...
	I1202 22:23:27.676226  539599 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 22:23:27.676274  539599 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 22:23:27.693769  539599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 22:23:27.707725  539599 docker.go:218] disabling cri-docker service (if available) ...
	I1202 22:23:27.707786  539599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 22:23:27.723493  539599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 22:23:27.736424  539599 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 22:23:27.854661  539599 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 22:23:27.963950  539599 docker.go:234] disabling docker service ...
	I1202 22:23:27.964063  539599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 22:23:27.978913  539599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 22:23:27.991719  539599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 22:23:28.130013  539599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 22:23:28.245844  539599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 22:23:28.260063  539599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 22:23:28.275361  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 22:23:28.284485  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 22:23:28.293418  539599 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 22:23:28.293496  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 22:23:28.303246  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:23:28.311801  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 22:23:28.320376  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:23:28.329142  539599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 22:23:28.337281  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 22:23:28.345966  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 22:23:28.354511  539599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
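
Each sed invocation above is a line-anchored regex replacement on /etc/containerd/config.toml. Two representative edits (sandbox_image and SystemdCgroup) expressed with Go's regexp package, run against an inlined sample config rather than the real file:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `[plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.9"
      SystemdCgroup = true`
    	// same pattern as the sed above: preserve indentation, replace the value
    	conf = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).
    		ReplaceAllString(conf, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`)
    	// force cgroupfs, matching the "configuring containerd" line in the log
    	conf = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).
    		ReplaceAllString(conf, `${1}SystemdCgroup = false`)
    	fmt.Println(conf)
    }
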
	I1202 22:23:28.364146  539599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 22:23:28.372643  539599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 22:23:28.380193  539599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:23:28.515073  539599 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1202 22:23:28.603166  539599 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 22:23:28.603246  539599 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 22:23:28.607302  539599 start.go:564] Will wait 60s for crictl version
	I1202 22:23:28.607362  539599 ssh_runner.go:195] Run: which crictl
	I1202 22:23:28.610925  539599 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 22:23:28.635209  539599 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 22:23:28.635324  539599 ssh_runner.go:195] Run: containerd --version
	I1202 22:23:28.654862  539599 ssh_runner.go:195] Run: containerd --version
	I1202 22:23:28.679684  539599 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 22:23:28.682772  539599 cli_runner.go:164] Run: docker network inspect no-preload-904303 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:23:28.698164  539599 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1202 22:23:28.701843  539599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:23:28.711778  539599 kubeadm.go:884] updating cluster {Name:no-preload-904303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-904303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 22:23:28.711898  539599 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:23:28.711951  539599 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:23:28.735775  539599 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 22:23:28.735798  539599 cache_images.go:86] Images are preloaded, skipping loading
	I1202 22:23:28.735806  539599 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1202 22:23:28.735943  539599 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-904303 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-904303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 22:23:28.736021  539599 ssh_runner.go:195] Run: sudo crictl info
	I1202 22:23:28.764321  539599 cni.go:84] Creating CNI manager for ""
	I1202 22:23:28.764345  539599 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:23:28.764366  539599 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 22:23:28.764390  539599 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-904303 NodeName:no-preload-904303 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 22:23:28.764517  539599 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-904303"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
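The rendered kubeadm config above bundles four API objects in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), separated by the YAML document marker "---". A toy sketch of recovering the individual documents by splitting on that separator; the inline bundle is a stand-in for the real file written to /var/tmp/minikube/kubeadm.yaml.new:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// stand-in for the four-document bundle rendered above
    	bundle := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\n" +
    		"kind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration"
    	for i, doc := range strings.Split(bundle, "\n---\n") {
    		// print only the first line of each document, which names its kind
    		fmt.Printf("document %d starts with %q\n", i, strings.SplitN(doc, "\n", 2)[0])
    	}
    }
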
	I1202 22:23:28.764598  539599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:23:28.772222  539599 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 22:23:28.772309  539599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 22:23:28.779448  539599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 22:23:28.793067  539599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 22:23:28.808283  539599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
	I1202 22:23:28.821081  539599 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1202 22:23:28.825311  539599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:23:28.834378  539599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:23:28.950783  539599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:23:28.967883  539599 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303 for IP: 192.168.76.2
	I1202 22:23:28.967905  539599 certs.go:195] generating shared ca certs ...
	I1202 22:23:28.967921  539599 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:23:28.968118  539599 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 22:23:28.968196  539599 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 22:23:28.968211  539599 certs.go:257] generating profile certs ...
	I1202 22:23:28.968343  539599 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/client.key
	I1202 22:23:28.968433  539599 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/apiserver.key.c0dba49d
	I1202 22:23:28.968505  539599 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/proxy-client.key
	I1202 22:23:28.968647  539599 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 22:23:28.968707  539599 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 22:23:28.968723  539599 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 22:23:28.968768  539599 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 22:23:28.968803  539599 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 22:23:28.968848  539599 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 22:23:28.968924  539599 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:23:28.969565  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 22:23:28.991512  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 22:23:29.009709  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 22:23:29.027230  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 22:23:29.044859  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 22:23:29.062550  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 22:23:29.081123  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 22:23:29.099160  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/no-preload-904303/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 22:23:29.116143  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 22:23:29.133341  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 22:23:29.151391  539599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 22:23:29.168872  539599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 22:23:29.181520  539599 ssh_runner.go:195] Run: openssl version
	I1202 22:23:29.187676  539599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 22:23:29.196257  539599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 22:23:29.205152  539599 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 22:23:29.205469  539599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 22:23:29.248525  539599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
	I1202 22:23:29.256283  539599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 22:23:29.264508  539599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 22:23:29.268219  539599 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 22:23:29.268296  539599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 22:23:29.310186  539599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 22:23:29.318336  539599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 22:23:29.326934  539599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:23:29.330579  539599 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:23:29.330642  539599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:23:29.371552  539599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
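
The "openssl x509 -hash" runs above compute the subject-hash names (51391683.0, 3ec20f2e.0, b5213941.0) under which OpenSSL-style clients look up CA certificates in /etc/ssl/certs. A sketch that shells out for the hash and creates the matching symlink, with paths taken from the log; it needs root to touch the real directories, and unlike the log's "test -L || ln -fs" guard, os.Symlink simply errors if the link already exists.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// print the OpenSSL subject hash, exactly like the openssl runs above
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
    		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	hash := strings.TrimSpace(string(out))
    	// link /etc/ssl/certs/<hash>.0 at the cert, as the ln -fs above does
    	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", "/etc/ssl/certs/"+hash+".0"); err != nil {
    		fmt.Println(err)
    	}
    }
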
	I1202 22:23:29.379401  539599 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 22:23:29.383174  539599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 22:23:29.424449  539599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 22:23:29.465479  539599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 22:23:29.506825  539599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 22:23:29.548324  539599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 22:23:29.590054  539599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
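
Each "openssl x509 -checkend 86400" run above asks whether the certificate expires within the next 86400 seconds (one day); success lets minikube reuse the cert instead of regenerating it. The equivalent test in Go, reading one of the file paths from the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Println("no PEM block found")
    		return
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	// same question as -checkend 86400: does the cert outlive the next day?
    	fmt.Println("expires within 24h:", time.Now().Add(24*time.Hour).After(cert.NotAfter))
    }
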
	I1202 22:23:29.631250  539599 kubeadm.go:401] StartCluster: {Name:no-preload-904303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-904303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:23:29.631343  539599 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 22:23:29.631409  539599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:23:29.658856  539599 cri.go:89] found id: ""
	I1202 22:23:29.658951  539599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 22:23:29.666619  539599 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 22:23:29.666682  539599 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 22:23:29.666755  539599 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 22:23:29.674368  539599 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 22:23:29.674844  539599 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-904303" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:23:29.674950  539599 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-261381/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-904303" cluster setting kubeconfig missing "no-preload-904303" context setting]
	I1202 22:23:29.675336  539599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:23:29.676685  539599 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 22:23:29.684684  539599 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1202 22:23:29.684718  539599 kubeadm.go:602] duration metric: took 18.021774ms to restartPrimaryControlPlane
	I1202 22:23:29.684728  539599 kubeadm.go:403] duration metric: took 53.489812ms to StartCluster
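
The restart decision above hinges on the "sudo diff -u" of the deployed kubeadm.yaml against the freshly rendered kubeadm.yaml.new: identical files mean the running control plane needs no reconfiguration. A plain byte-for-byte comparison is enough to sketch that decision (file names from the log; what the real code tolerates beyond this is not shown here):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // needsReconfig reports whether the proposed config differs from the
    // deployed one; a missing deployed file also forces reconfiguration.
    func needsReconfig(current, proposed string) (bool, error) {
    	a, err := os.ReadFile(current)
    	if err != nil {
    		return true, err
    	}
    	b, err := os.ReadFile(proposed)
    	if err != nil {
    		return true, err
    	}
    	return !bytes.Equal(a, b), nil
    }

    func main() {
    	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new")
    	fmt.Println("needs reconfiguration:", changed, err)
    }
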
	I1202 22:23:29.684761  539599 settings.go:142] acquiring lock: {Name:mk484fa83ac7553aeb154b510943680cadb4046e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:23:29.684832  539599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:23:29.685511  539599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:23:29.685828  539599 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 22:23:29.686092  539599 config.go:182] Loaded profile config "no-preload-904303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:23:29.686168  539599 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 22:23:29.686270  539599 addons.go:70] Setting storage-provisioner=true in profile "no-preload-904303"
	I1202 22:23:29.686289  539599 addons.go:70] Setting dashboard=true in profile "no-preload-904303"
	I1202 22:23:29.686314  539599 addons.go:239] Setting addon dashboard=true in "no-preload-904303"
	W1202 22:23:29.686329  539599 addons.go:248] addon dashboard should already be in state true
	I1202 22:23:29.686347  539599 addons.go:70] Setting default-storageclass=true in profile "no-preload-904303"
	I1202 22:23:29.686392  539599 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-904303"
	I1202 22:23:29.686365  539599 host.go:66] Checking if "no-preload-904303" exists ...
	I1202 22:23:29.686741  539599 cli_runner.go:164] Run: docker container inspect no-preload-904303 --format={{.State.Status}}
	I1202 22:23:29.687205  539599 cli_runner.go:164] Run: docker container inspect no-preload-904303 --format={{.State.Status}}
	I1202 22:23:29.686300  539599 addons.go:239] Setting addon storage-provisioner=true in "no-preload-904303"
	I1202 22:23:29.687386  539599 host.go:66] Checking if "no-preload-904303" exists ...
	I1202 22:23:29.687839  539599 cli_runner.go:164] Run: docker container inspect no-preload-904303 --format={{.State.Status}}
	I1202 22:23:29.691834  539599 out.go:179] * Verifying Kubernetes components...
	I1202 22:23:29.694973  539599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:23:29.723676  539599 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 22:23:29.726567  539599 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 22:23:29.727985  539599 addons.go:239] Setting addon default-storageclass=true in "no-preload-904303"
	I1202 22:23:29.728025  539599 host.go:66] Checking if "no-preload-904303" exists ...
	I1202 22:23:29.728437  539599 cli_runner.go:164] Run: docker container inspect no-preload-904303 --format={{.State.Status}}
	I1202 22:23:29.730468  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 22:23:29.730497  539599 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 22:23:29.730574  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:29.767956  539599 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:23:29.773779  539599 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:23:29.773811  539599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 22:23:29.773880  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:29.776990  539599 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 22:23:29.777009  539599 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 22:23:29.777075  539599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-904303
	I1202 22:23:29.799621  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:29.808373  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:29.833909  539599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/no-preload-904303/id_rsa Username:docker}
	I1202 22:23:29.941913  539599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:23:29.974317  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:23:29.986732  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 22:23:29.986766  539599 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 22:23:29.988626  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 22:23:30.055576  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 22:23:30.055607  539599 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 22:23:30.079790  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 22:23:30.079826  539599 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 22:23:30.094823  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 22:23:30.094847  539599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 22:23:30.108744  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 22:23:30.108770  539599 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 22:23:30.122434  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 22:23:30.122505  539599 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 22:23:30.136402  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 22:23:30.136428  539599 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 22:23:30.149357  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 22:23:30.149379  539599 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 22:23:30.162415  539599 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:23:30.162439  539599 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 22:23:30.175846  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:23:30.793328  539599 node_ready.go:35] waiting up to 6m0s for node "no-preload-904303" to be "Ready" ...
	W1202 22:23:30.793609  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:30.793677  539599 retry.go:31] will retry after 226.751663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:30.793357  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:30.793705  539599 retry.go:31] will retry after 335.186857ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:30.793399  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:30.793714  539599 retry.go:31] will retry after 230.72192ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.021321  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:23:31.024967  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 22:23:31.129921  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:31.131942  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.131973  539599 retry.go:31] will retry after 517.463505ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:31.159634  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.159717  539599 retry.go:31] will retry after 524.371625ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:31.201804  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.201874  539599 retry.go:31] will retry after 509.080585ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.649705  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:23:31.685138  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 22:23:31.711681  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:31.715443  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.715487  539599 retry.go:31] will retry after 516.235738ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:31.771458  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.771492  539599 retry.go:31] will retry after 380.898006ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:31.788553  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:31.788587  539599 retry.go:31] will retry after 774.998834ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:32.153620  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:23:32.209503  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:32.209561  539599 retry.go:31] will retry after 823.770631ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:32.232894  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:32.305176  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:32.305224  539599 retry.go:31] will retry after 976.715215ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:32.563746  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:32.634445  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:32.634479  539599 retry.go:31] will retry after 1.162769509s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:32.794321  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:33.033893  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:23:33.114977  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:33.115010  539599 retry.go:31] will retry after 714.879346ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:33.282251  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:33.363554  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:33.363588  539599 retry.go:31] will retry after 844.770065ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:33.798288  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:23:33.830720  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:23:33.885889  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:33.885960  539599 retry.go:31] will retry after 916.714322ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:33.923753  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:33.923789  539599 retry.go:31] will retry after 2.520575053s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:34.209208  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:34.271506  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:34.271542  539599 retry.go:31] will retry after 1.776064467s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:34.803362  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:34.881750  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:34.881784  539599 retry.go:31] will retry after 1.907866633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:35.294715  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:36.048128  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:36.141002  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:36.141036  539599 retry.go:31] will retry after 3.038923278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:36.444914  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:23:36.502465  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:36.502494  539599 retry.go:31] will retry after 3.727542871s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:36.789806  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:36.867898  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:36.867931  539599 retry.go:31] will retry after 1.939289637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:37.294882  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:38.808288  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:38.866824  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:38.866857  539599 retry.go:31] will retry after 5.857922191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:39.180195  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:39.238103  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:39.238139  539599 retry.go:31] will retry after 4.546361483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:39.794712  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:40.230300  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:23:40.298135  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:40.298168  539599 retry.go:31] will retry after 2.477378234s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:42.294051  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:42.775949  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:23:42.856429  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:42.856458  539599 retry.go:31] will retry after 3.440810022s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
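The retry.go:31 entries show minikube re-running each failed apply after a growing, randomized delay (1.9s, 3.0s, 3.7s, 5.9s, and later 21.6s for the storage-provisioner). A minimal sketch of that retry-with-jittered-backoff pattern, purely illustrative and not minikube's actual retry.go:

// retry_sketch.go -- an illustration of retry with jittered, roughly
// doubling delays, the pattern the retry.go:31 lines above suggest.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to maxAttempts times, sleeping a jittered,
// roughly doubling delay between failures.
func retry(maxAttempts int, base time.Duration, fn func() error) error {
	delay := base
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		if attempt == maxAttempts {
			break
		}
		// Jitter: sleep between 0.5x and 1.5x of the current delay,
		// then double it for the next round.
		jittered := time.Duration(float64(delay) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
	return fmt.Errorf("gave up after %d attempts: %w", maxAttempts, err)
}

func main() {
	err := retry(5, 2*time.Second, func() error {
		return errors.New("apply failed: connection refused") // stand-in for the kubectl apply above
	})
	fmt.Println(err)
}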
	I1202 22:23:43.784770  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:43.864473  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:43.864510  539599 retry.go:31] will retry after 7.11067177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:44.294480  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:44.725002  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:44.781836  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:44.781872  539599 retry.go:31] will retry after 4.295308457s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:46.294868  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:46.298023  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:23:46.357922  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:46.357955  539599 retry.go:31] will retry after 9.581881684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:48.793879  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:23:49.077320  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:49.140226  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:49.140259  539599 retry.go:31] will retry after 6.825419406s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:50.976239  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:51.036594  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:51.036631  539599 retry.go:31] will retry after 6.351616515s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:51.293979  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:23:53.294465  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:23:55.294759  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
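In parallel with the addon applies, node_ready.go:55 polls the node's "Ready" condition every couple of seconds and gets the same connection-refused error, here from 192.168.76.2:8443. A sketch of what such a check can look like with client-go; the kubeconfig path and node name are taken from the commands and errors above, and the exact logic of node_ready.go is an assumption:

// node_ready_sketch.go -- a client-go sketch of the check node_ready.go:55
// appears to perform: GET the node, then read its "Ready" condition.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path taken from the sudo KUBECONFIG=... commands in the log above.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(context.Background(), "no-preload-904303", metav1.GetOptions{})
	if err != nil {
		// While the apiserver is down this fails exactly like the log:
		// dial tcp 192.168.76.2:8443: connect: connection refused
		log.Fatalf("error getting node condition: %v", err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Println("Ready condition:", cond.Status)
		}
	}
}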
	I1202 22:23:55.941027  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1202 22:23:55.966502  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:23:56.026736  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:56.026772  539599 retry.go:31] will retry after 11.682115483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:56.034530  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:56.034568  539599 retry.go:31] will retry after 21.573683328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
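The stderr's own suggestion, --validate=false, would only skip the OpenAPI download; the apply itself still has to reach the apiserver, so with the port refusing connections it would fail regardless, which is presumably why minikube keeps retrying rather than disabling validation. For completeness, an illustrative sketch of invoking the same kubectl binary with validation off, mirroring the ssh_runner command lines above:

// apply_novalidate.go -- illustrative only: the workaround the stderr
// suggests. It skips schema validation but cannot fix a down apiserver.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Command assembled from the Run: lines in the log, plus --validate=false.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
	)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err) // still connection refused while :8443 is down
	}
}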
	I1202 22:23:57.388457  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:23:57.448613  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:23:57.448645  539599 retry.go:31] will retry after 10.383504228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:23:57.794117  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:00.314267  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:02.794140  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:05.294024  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:24:07.709736  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:24:07.771482  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:24:07.771516  539599 retry.go:31] will retry after 18.342032468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:24:07.793883  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:24:07.833189  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:24:07.897703  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:24:07.897737  539599 retry.go:31] will retry after 18.464738845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:24:09.794642  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:12.294662  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:14.793819  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:17.294881  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:24:17.608495  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:24:17.666726  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:24:17.666760  539599 retry.go:31] will retry after 22.163689128s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:24:19.794151  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:22.293956  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:24.793964  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:24:26.114427  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:24:26.175665  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:24:26.175697  539599 retry.go:31] will retry after 38.633031501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:24:26.363620  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:24:26.423962  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:24:26.423998  539599 retry.go:31] will retry after 35.128284125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:24:27.293903  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:29.793923  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:31.794841  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:34.294771  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:36.793951  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:38.794027  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:24:39.831556  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:24:39.903792  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:24:39.903830  539599 retry.go:31] will retry after 44.791338045s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
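	The waits retry.go logs above (18.34s, 18.46s, 22.16s, 38.63s, 35.13s, 44.79s) are consistent with a jittered, roughly geometric backoff. A rough shell sketch of the same apply-then-back-off loop (illustrative only; "addon.yaml" is a placeholder, and minikube computes these delays internally in retry.go rather than in shell):

	    # Retry an apply with a doubling delay; minikube additionally jitters
	    # the wait, which is why the logged durations are not round numbers.
	    delay=18
	    for attempt in 1 2 3 4; do
	      sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --force -f addon.yaml && break
	      sleep "$delay"
	      delay=$(( delay * 2 ))
	    done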
	W1202 22:24:40.794755  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:43.293945  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:45.294875  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:47.793934  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:50.293988  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:52.794271  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:55.293940  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:57.293989  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:24:59.294085  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:25:01.552593  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:25:01.623403  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:25:01.623527  539599 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1202 22:25:01.794122  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:04.293962  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:25:04.809098  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:25:04.882204  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:25:04.882306  539599 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1202 22:25:06.793963  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:09.293949  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:11.294016  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:13.793954  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:16.293941  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:18.794012  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:21.293858  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:23.293935  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:25:24.695432  539599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:25:24.758224  539599 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:25:24.758320  539599 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 22:25:24.761124  539599 out.go:179] * Enabled addons: 
	I1202 22:25:24.763849  539599 addons.go:530] duration metric: took 1m55.077683231s for enable addons: enabled=[]
	W1202 22:25:25.294695  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:27.794152  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:29.794455  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:32.294020  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:34.793867  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:36.794964  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:39.293887  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:41.294800  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:43.793975  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:46.294913  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:48.794053  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:51.294004  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:53.793974  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:55.794883  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:25:58.294031  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:00.298658  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:02.793984  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:05.294880  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:26:09.304341  530747 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000829165s
	I1202 22:26:09.304377  530747 kubeadm.go:319] 
	I1202 22:26:09.304436  530747 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1202 22:26:09.304468  530747 kubeadm.go:319] 	- The kubelet is not running
	I1202 22:26:09.304573  530747 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1202 22:26:09.304578  530747 kubeadm.go:319] 
	I1202 22:26:09.304682  530747 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1202 22:26:09.304714  530747 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1202 22:26:09.304745  530747 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1202 22:26:09.304749  530747 kubeadm.go:319] 
	I1202 22:26:09.313335  530747 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1202 22:26:09.313922  530747 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1202 22:26:09.314042  530747 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 22:26:09.314323  530747 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1202 22:26:09.314329  530747 kubeadm.go:319] 
	I1202 22:26:09.314401  530747 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
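	The triage kubeadm suggests above collapses into three checks on the node; the healthz URL in the last one is the probe kubeadm was polling for the full 4m0s:

	    systemctl status kubelet                 # is the unit running at all?
	    journalctl -xeu kubelet -n 100           # most recent kubelet errors
	    curl -s http://127.0.0.1:10248/healthz   # the probe behind the timeout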
	I1202 22:26:09.314453  530747 kubeadm.go:403] duration metric: took 8m6.243910977s to StartCluster
	I1202 22:26:09.314492  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:26:09.314550  530747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:26:09.344686  530747 cri.go:89] found id: ""
	I1202 22:26:09.344749  530747 logs.go:282] 0 containers: []
	W1202 22:26:09.344773  530747 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:26:09.344791  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:26:09.344878  530747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:26:09.374164  530747 cri.go:89] found id: ""
	I1202 22:26:09.374191  530747 logs.go:282] 0 containers: []
	W1202 22:26:09.374200  530747 logs.go:284] No container was found matching "etcd"
	I1202 22:26:09.374208  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:26:09.374272  530747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:26:09.397230  530747 cri.go:89] found id: ""
	I1202 22:26:09.397254  530747 logs.go:282] 0 containers: []
	W1202 22:26:09.397263  530747 logs.go:284] No container was found matching "coredns"
	I1202 22:26:09.397269  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:26:09.397328  530747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:26:09.421960  530747 cri.go:89] found id: ""
	I1202 22:26:09.421985  530747 logs.go:282] 0 containers: []
	W1202 22:26:09.421994  530747 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:26:09.422001  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:26:09.422060  530747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:26:09.446528  530747 cri.go:89] found id: ""
	I1202 22:26:09.446549  530747 logs.go:282] 0 containers: []
	W1202 22:26:09.446558  530747 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:26:09.446595  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:26:09.446678  530747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:26:09.471220  530747 cri.go:89] found id: ""
	I1202 22:26:09.471253  530747 logs.go:282] 0 containers: []
	W1202 22:26:09.471262  530747 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:26:09.471268  530747 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:26:09.471341  530747 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:26:09.495256  530747 cri.go:89] found id: ""
	I1202 22:26:09.495280  530747 logs.go:282] 0 containers: []
	W1202 22:26:09.495288  530747 logs.go:284] No container was found matching "kindnet"
	I1202 22:26:09.495298  530747 logs.go:123] Gathering logs for kubelet ...
	I1202 22:26:09.495309  530747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:26:09.554867  530747 logs.go:123] Gathering logs for dmesg ...
	I1202 22:26:09.554905  530747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:26:09.573421  530747 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:26:09.573449  530747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:26:09.657032  530747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:26:09.648318    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:09.649093    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:09.650698    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:09.651305    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:09.652859    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:26:09.648318    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:09.649093    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:09.650698    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:09.651305    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:26:09.652859    5437 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:26:09.657063  530747 logs.go:123] Gathering logs for containerd ...
	I1202 22:26:09.657076  530747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:26:09.698332  530747 logs.go:123] Gathering logs for container status ...
	I1202 22:26:09.698373  530747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 22:26:09.727969  530747 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000829165s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1202 22:26:09.728028  530747 out.go:285] * 
	W1202 22:26:09.728078  530747 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000829165s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 22:26:09.728093  530747 out.go:285] * 
	W1202 22:26:09.730426  530747 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 22:26:09.735441  530747 out.go:203] 
	W1202 22:26:09.738403  530747 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000829165s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1202 22:26:09.738449  530747 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1202 22:26:09.738475  530747 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1202 22:26:09.741577  530747 out.go:203] 
	W1202 22:26:07.793915  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:10.294520  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:12.294621  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:14.793949  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:17.293941  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:19.794051  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:21.794695  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:24.294836  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:26.793853  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:28.793902  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:30.794014  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:33.293873  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:35.293924  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:37.294030  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:39.794048  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:42.293975  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:44.294949  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:46.793828  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:49.294869  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:51.793896  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:54.293838  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:56.294801  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:26:58.793895  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:00.794580  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:03.293825  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:05.294894  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:07.793866  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:09.794023  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:12.293913  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:14.294900  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:16.794731  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:18.794973  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:21.293923  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:23.793947  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:26.294900  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:28.793913  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:30.793996  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:33.294859  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:35.793879  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:37.793923  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:39.794127  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:42.296591  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:44.794176  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:46.794502  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:48.794763  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:51.293938  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:53.793923  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:56.293967  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:27:58.794850  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:01.294024  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 22:17:54 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:54.779220611Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:17:55 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:55.651148199Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
	Dec 02 22:17:55 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:55.653829279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
	Dec 02 22:17:55 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:55.668308994Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:17:55 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:55.669111160Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:17:56 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:56.622984489Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
	Dec 02 22:17:56 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:56.625140122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
	Dec 02 22:17:56 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:56.632796897Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:17:56 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:56.633245999Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:17:57 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:57.640322955Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
	Dec 02 22:17:57 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:57.642498821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
	Dec 02 22:17:57 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:57.650351217Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:17:57 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:57.651336556Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:17:58 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:58.673716263Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
	Dec 02 22:17:58 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:58.675881036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
	Dec 02 22:17:58 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:58.687140143Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:17:58 newest-cni-250247 containerd[756]: time="2025-12-02T22:17:58.687773748Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:18:00 newest-cni-250247 containerd[756]: time="2025-12-02T22:18:00.213301739Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
	Dec 02 22:18:00 newest-cni-250247 containerd[756]: time="2025-12-02T22:18:00.215802009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
	Dec 02 22:18:00 newest-cni-250247 containerd[756]: time="2025-12-02T22:18:00.235906934Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:18:00 newest-cni-250247 containerd[756]: time="2025-12-02T22:18:00.236893889Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:18:00 newest-cni-250247 containerd[756]: time="2025-12-02T22:18:00.731026224Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
	Dec 02 22:18:00 newest-cni-250247 containerd[756]: time="2025-12-02T22:18:00.733363816Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
	Dec 02 22:18:00 newest-cni-250247 containerd[756]: time="2025-12-02T22:18:00.742879446Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Dec 02 22:18:00 newest-cni-250247 containerd[756]: time="2025-12-02T22:18:00.743378803Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:28:07.890391    6662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:28:07.891269    6662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:28:07.892171    6662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:28:07.893746    6662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:28:07.894159    6662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 22:28:07 up  4:10,  0 user,  load average: 0.47, 0.61, 1.15
	Linux newest-cni-250247 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 22:28:04 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:28:05 newest-cni-250247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 474.
	Dec 02 22:28:05 newest-cni-250247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:28:05 newest-cni-250247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:28:05 newest-cni-250247 kubelet[6542]: E1202 22:28:05.091382    6542 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:28:05 newest-cni-250247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:28:05 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:28:05 newest-cni-250247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 475.
	Dec 02 22:28:05 newest-cni-250247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:28:05 newest-cni-250247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:28:05 newest-cni-250247 kubelet[6548]: E1202 22:28:05.832606    6548 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:28:05 newest-cni-250247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:28:05 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:28:06 newest-cni-250247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 476.
	Dec 02 22:28:06 newest-cni-250247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:28:06 newest-cni-250247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:28:06 newest-cni-250247 kubelet[6553]: E1202 22:28:06.584286    6553 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:28:06 newest-cni-250247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:28:06 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:28:07 newest-cni-250247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 477.
	Dec 02 22:28:07 newest-cni-250247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:28:07 newest-cni-250247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:28:07 newest-cni-250247 kubelet[6578]: E1202 22:28:07.354918    6578 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:28:07 newest-cni-250247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:28:07 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
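
The kubelet journal above shows the actual failure mode: the v1.35.0-beta.0 kubelet refuses to start because the host is on cgroup v1, which matches the kubeadm SystemVerification warning about setting the kubelet configuration option 'FailCgroupV1' to 'false'. A minimal diagnostic sketch, assuming shell access to the affected node; the stat probe is a standard cgroup-version check and is not taken from this log:

	# Print the filesystem type mounted at /sys/fs/cgroup:
	# "cgroup2fs" means cgroup v2; "tmpfs" indicates the cgroup v1 layout
	# that trips the kubelet validation recorded above.
	stat -fc %T /sys/fs/cgroup

	# The remediation minikube itself suggests earlier in this log (profile
	# name omitted here; whether it clears the cgroup v1 validation is untested):
	minikube start --extra-config=kubelet.cgroup-driver=systemd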
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-250247 -n newest-cni-250247
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-250247 -n newest-cni-250247: exit status 6 (323.582424ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 22:28:08.429013  546066 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-250247" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "newest-cni-250247" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (117.03s)
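
The stale-context warning and the kubeconfig endpoint error above point at the same state: the profile's entry is missing from the integration kubeconfig. A hedged sketch of the follow-up checks, using standard kubectl/minikube commands rather than anything run by this test:

	# List the contexts actually present in the kubeconfig the test uses:
	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/21997-261381/kubeconfig

	# Rewrite the context for this profile, as the warning above recommends:
	minikube update-context -p newest-cni-250247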

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (373.54s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-250247 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p newest-cni-250247 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 105 (6m8.544533809s)

                                                
                                                
-- stdout --
	* [newest-cni-250247] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "newest-cni-250247" primary control-plane node in "newest-cni-250247" cluster
	* Pulling base image v0.0.48-1764169655-21974 ...
	* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 22:28:09.982860  546345 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:28:09.982990  546345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:28:09.983001  546345 out.go:374] Setting ErrFile to fd 2...
	I1202 22:28:09.983006  546345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:28:09.983258  546345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 22:28:09.983629  546345 out.go:368] Setting JSON to false
	I1202 22:28:09.984474  546345 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15028,"bootTime":1764699462,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 22:28:09.984540  546345 start.go:143] virtualization:  
	I1202 22:28:09.987326  546345 out.go:179] * [newest-cni-250247] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:28:09.991071  546345 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:28:09.991190  546345 notify.go:221] Checking for updates...
	I1202 22:28:09.996957  546345 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:28:09.999951  546345 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:28:10.003165  546345 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 22:28:10.010024  546345 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:28:10.023215  546345 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:28:10.026934  546345 config.go:182] Loaded profile config "newest-cni-250247": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:28:10.027740  546345 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:28:10.065520  546345 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:28:10.065629  546345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:28:10.146197  546345 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:28:10.137008488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:28:10.146302  546345 docker.go:319] overlay module found
	I1202 22:28:10.149701  546345 out.go:179] * Using the docker driver based on existing profile
	I1202 22:28:10.152553  546345 start.go:309] selected driver: docker
	I1202 22:28:10.152579  546345 start.go:927] validating driver "docker" against &{Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:28:10.152714  546345 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:28:10.153449  546345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:28:10.206765  546345 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:28:10.197797072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:28:10.207092  546345 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 22:28:10.207126  546345 cni.go:84] Creating CNI manager for ""
	I1202 22:28:10.207191  546345 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:28:10.207234  546345 start.go:353] cluster config:
	{Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:28:10.210373  546345 out.go:179] * Starting "newest-cni-250247" primary control-plane node in "newest-cni-250247" cluster
	I1202 22:28:10.213164  546345 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 22:28:10.216139  546345 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 22:28:10.218905  546345 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:28:10.218974  546345 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 22:28:10.241012  546345 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 22:28:10.241034  546345 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 22:28:10.277912  546345 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1202 22:28:10.461684  546345 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
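The two 404s above are the preload probe: minikube checks the GCS bucket first, then the GitHub releases mirror, and only falls back to per-image caching (seen below) when both miss. A minimal Go sketch of that ordered probe, under the assumption that a HEAD returning 200 means the tarball exists; `preloadExists` is a name invented for this sketch:

```go
// Probe each preload mirror in order; a 404 (or any non-200) means
// "try the next mirror", and exhausting the list means "no preload".
package main

import (
	"fmt"
	"net/http"
)

func preloadExists(urls []string) (string, bool) {
	for _, u := range urls {
		resp, err := http.Head(u)
		if err != nil {
			continue // network error: try the next mirror
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return u, true
		}
	}
	return "", false
}

func main() {
	// The two candidate URLs seen in the log above.
	urls := []string{
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4",
		"https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4",
	}
	if u, ok := preloadExists(urls); ok {
		fmt.Println("preload found at", u)
	} else {
		fmt.Println("no preload tarball; caching images individually")
	}
}
```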
	I1202 22:28:10.461922  546345 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json ...
	I1202 22:28:10.461950  546345 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462038  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 22:28:10.462049  546345 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.248µs
	I1202 22:28:10.462062  546345 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 22:28:10.462074  546345 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462104  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 22:28:10.462109  546345 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 36.282µs
	I1202 22:28:10.462115  546345 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 22:28:10.462125  546345 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462157  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 22:28:10.462162  546345 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 38.727µs
	I1202 22:28:10.462169  546345 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 22:28:10.462179  546345 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462196  546345 cache.go:243] Successfully downloaded all kic artifacts
	I1202 22:28:10.462206  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 22:28:10.462212  546345 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 34.534µs
	I1202 22:28:10.462218  546345 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 22:28:10.462227  546345 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462237  546345 start.go:360] acquireMachinesLock for newest-cni-250247: {Name:mk16586a4ea8dcb4ae29d3b0c6fe6a71644be6ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462253  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 22:28:10.462258  546345 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 32.098µs
	I1202 22:28:10.462265  546345 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 22:28:10.462274  546345 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462280  546345 start.go:364] duration metric: took 29.16µs to acquireMachinesLock for "newest-cni-250247"
	I1202 22:28:10.462305  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 22:28:10.462305  546345 start.go:96] Skipping create...Using existing machine configuration
	I1202 22:28:10.462319  546345 fix.go:54] fixHost starting: 
	I1202 22:28:10.462321  546345 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462350  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 22:28:10.462360  546345 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 34.731µs
	I1202 22:28:10.462365  546345 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 22:28:10.462378  546345 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462404  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 22:28:10.462408  546345 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 35.396µs
	I1202 22:28:10.462414  546345 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 22:28:10.462311  546345 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 38.21µs
	I1202 22:28:10.462504  546345 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 22:28:10.462515  546345 cache.go:87] Successfully saved all images to host disk.
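Each cache hit above maps an image reference to a tar file whose name swaps the tag separator ':' for '_'. A small sketch of that path mapping, inferred purely from the paths in the log (the helper name `cachePath` is hypothetical):

```go
// cachePath maps an image reference to its on-disk cache location, as seen in
// the log: ".../cache/images/<arch>/<repo>/<image>_<tag>".
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

func cachePath(minikubeHome, arch, imageRef string) string {
	// The tag separator ':' becomes '_' in the cached file name.
	return filepath.Join(minikubeHome, "cache", "images", arch,
		strings.ReplaceAll(imageRef, ":", "_"))
}

func main() {
	fmt.Println(cachePath(
		"/home/jenkins/minikube-integration/21997-261381/.minikube",
		"arm64", "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"))
	// -> .../cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
}
```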
	I1202 22:28:10.462628  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:10.483660  546345 fix.go:112] recreateIfNeeded on newest-cni-250247: state=Stopped err=<nil>
	W1202 22:28:10.483692  546345 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 22:28:10.487123  546345 out.go:252] * Restarting existing docker container for "newest-cni-250247" ...
	I1202 22:28:10.487212  546345 cli_runner.go:164] Run: docker start newest-cni-250247
	I1202 22:28:10.752920  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:10.774107  546345 kic.go:430] container "newest-cni-250247" state is running.
	I1202 22:28:10.775430  546345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:28:10.803310  546345 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json ...
	I1202 22:28:10.803660  546345 machine.go:94] provisionDockerMachine start ...
	I1202 22:28:10.803741  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:10.835254  546345 main.go:143] libmachine: Using SSH client type: native
	I1202 22:28:10.835574  546345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1202 22:28:10.835582  546345 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 22:28:10.836341  546345 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47630->127.0.0.1:33423: read: connection reset by peer
	I1202 22:28:13.985241  546345 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-250247
	
	I1202 22:28:13.985267  546345 ubuntu.go:182] provisioning hostname "newest-cni-250247"
	I1202 22:28:13.985331  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:14.004448  546345 main.go:143] libmachine: Using SSH client type: native
	I1202 22:28:14.004830  546345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1202 22:28:14.004852  546345 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-250247 && echo "newest-cni-250247" | sudo tee /etc/hostname
	I1202 22:28:14.162890  546345 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-250247
	
	I1202 22:28:14.162970  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:14.180049  546345 main.go:143] libmachine: Using SSH client type: native
	I1202 22:28:14.180364  546345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1202 22:28:14.180385  546345 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-250247' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-250247/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-250247' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 22:28:14.325738  546345 main.go:143] libmachine: SSH cmd err, output: <nil>: 
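Provisioning runs each of these steps (hostname, the /etc/hosts patch above) as one-off commands over the container's forwarded SSH port 33423. A minimal sketch of running such a command with golang.org/x/crypto/ssh, using the user, port, and key path from the log; host-key checking is skipped here only because the target is a throwaway test container:

```go
// Run one provisioning command over SSH; error handling trimmed for brevity.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, _ := os.ReadFile("/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa")
	signer, _ := ssh.ParsePrivateKey(key)
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a disposable test VM
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33423", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, _ := client.NewSession()
	defer sess.Close()
	out, _ := sess.CombinedOutput(`sudo hostname newest-cni-250247 && echo "newest-cni-250247" | sudo tee /etc/hostname`)
	fmt.Printf("%s", out)
}
```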
	I1202 22:28:14.325762  546345 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 22:28:14.325781  546345 ubuntu.go:190] setting up certificates
	I1202 22:28:14.325790  546345 provision.go:84] configureAuth start
	I1202 22:28:14.325861  546345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:28:14.342936  546345 provision.go:143] copyHostCerts
	I1202 22:28:14.343009  546345 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 22:28:14.343017  546345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 22:28:14.343091  546345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 22:28:14.343188  546345 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 22:28:14.343193  546345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 22:28:14.343217  546345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 22:28:14.343264  546345 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 22:28:14.343269  546345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 22:28:14.343292  546345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 22:28:14.343342  546345 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.newest-cni-250247 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-250247]
	I1202 22:28:14.770203  546345 provision.go:177] copyRemoteCerts
	I1202 22:28:14.770270  546345 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 22:28:14.770310  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:14.787300  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:14.893004  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 22:28:14.909339  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 22:28:14.926255  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 22:28:14.942726  546345 provision.go:87] duration metric: took 616.921074ms to configureAuth
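configureAuth signs a fresh server certificate against the local minikube CA, embedding the SANs listed in the log line (127.0.0.1, 192.168.85.2, localhost, minikube, newest-cni-250247). A sketch of that signing step with crypto/x509; the file names and the PKCS#1 key format are assumptions, and error handling is trimmed:

```go
// Sign a server certificate with a local CA, embedding DNS and IP SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-250247"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-250247"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```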
	I1202 22:28:14.942753  546345 ubuntu.go:206] setting minikube options for container-runtime
	I1202 22:28:14.942983  546345 config.go:182] Loaded profile config "newest-cni-250247": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:28:14.942996  546345 machine.go:97] duration metric: took 4.139308859s to provisionDockerMachine
	I1202 22:28:14.943006  546345 start.go:293] postStartSetup for "newest-cni-250247" (driver="docker")
	I1202 22:28:14.943017  546345 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 22:28:14.943072  546345 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 22:28:14.943129  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:14.960329  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:15.069600  546345 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 22:28:15.072888  546345 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 22:28:15.072916  546345 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 22:28:15.072928  546345 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 22:28:15.073008  546345 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 22:28:15.073125  546345 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 22:28:15.073236  546345 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 22:28:15.080571  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:28:15.098287  546345 start.go:296] duration metric: took 155.265122ms for postStartSetup
	I1202 22:28:15.098433  546345 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:28:15.098514  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:15.116407  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:15.218632  546345 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 22:28:15.223330  546345 fix.go:56] duration metric: took 4.761004698s for fixHost
	I1202 22:28:15.223357  546345 start.go:83] releasing machines lock for "newest-cni-250247", held for 4.761068204s
	I1202 22:28:15.223423  546345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:28:15.240165  546345 ssh_runner.go:195] Run: cat /version.json
	I1202 22:28:15.240226  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:15.240474  546345 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 22:28:15.240537  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:15.266111  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:15.266672  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:15.465947  546345 ssh_runner.go:195] Run: systemctl --version
	I1202 22:28:15.472302  546345 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 22:28:15.476459  546345 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 22:28:15.476528  546345 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 22:28:15.484047  546345 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 22:28:15.484071  546345 start.go:496] detecting cgroup driver to use...
	I1202 22:28:15.484132  546345 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 22:28:15.484196  546345 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 22:28:15.501336  546345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 22:28:15.514809  546345 docker.go:218] disabling cri-docker service (if available) ...
	I1202 22:28:15.514870  546345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 22:28:15.529978  546345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 22:28:15.542949  546345 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 22:28:15.646754  546345 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 22:28:15.763470  546345 docker.go:234] disabling docker service ...
	I1202 22:28:15.763534  546345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 22:28:15.778139  546345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 22:28:15.790687  546345 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 22:28:15.899099  546345 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 22:28:16.013695  546345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 22:28:16.027166  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 22:28:16.044232  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 22:28:16.054377  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 22:28:16.064256  546345 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 22:28:16.064370  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 22:28:16.074182  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:28:16.083929  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 22:28:16.093428  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:28:16.103465  546345 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 22:28:16.111974  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 22:28:16.120391  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 22:28:16.129324  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1202 22:28:16.138640  546345 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 22:28:16.146079  546345 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 22:28:16.153383  546345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:28:16.258631  546345 ssh_runner.go:195] Run: sudo systemctl restart containerd
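The sed runs above rewrite /etc/containerd/config.toml in place so containerd uses the "cgroupfs" driver detected on the host, then restart the daemon. The core substitution, expressed as a Go multiline regexp that preserves the original indentation (equivalent to the logged `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`):

```go
// Flip SystemdCgroup to false everywhere in config.toml, keeping indentation.
package main

import (
	"os"
	"regexp"
)

func main() {
	path := "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// (?m) makes ^ and $ match per line; ${1} re-inserts the captured indent.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, data, 0644); err != nil {
		panic(err)
	}
}
```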
	I1202 22:28:16.349094  546345 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 22:28:16.349206  546345 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 22:28:16.353088  546345 start.go:564] Will wait 60s for crictl version
	I1202 22:28:16.353236  546345 ssh_runner.go:195] Run: which crictl
	I1202 22:28:16.356669  546345 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 22:28:16.382942  546345 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 22:28:16.383050  546345 ssh_runner.go:195] Run: containerd --version
	I1202 22:28:16.402826  546345 ssh_runner.go:195] Run: containerd --version
	I1202 22:28:16.429935  546345 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 22:28:16.432731  546345 cli_runner.go:164] Run: docker network inspect newest-cni-250247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:28:16.448989  546345 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 22:28:16.452808  546345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
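The bash one-liner above pins host.minikube.internal to the docker network gateway: it filters any stale entry out of /etc/hosts and appends the fresh mapping (the same pattern is reused later for control-plane.minikube.internal). The same edit as a Go sketch:

```go
// pinHost drops any stale line for name and appends "ip\tname", mirroring the
// grep -v / echo / cp pipeline in the log.
package main

import (
	"os"
	"strings"
)

func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
```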
	I1202 22:28:16.464968  546345 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1202 22:28:16.467854  546345 kubeadm.go:884] updating cluster {Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 22:28:16.468035  546345 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:28:16.468117  546345 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:28:16.491782  546345 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 22:28:16.491805  546345 cache_images.go:86] Images are preloaded, skipping loading
	I1202 22:28:16.491813  546345 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1202 22:28:16.491914  546345 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-250247 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 22:28:16.491984  546345 ssh_runner.go:195] Run: sudo crictl info
	I1202 22:28:16.515416  546345 cni.go:84] Creating CNI manager for ""
	I1202 22:28:16.515440  546345 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:28:16.515457  546345 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1202 22:28:16.515491  546345 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-250247 NodeName:newest-cni-250247 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 22:28:16.515606  546345 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-250247"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 22:28:16.515677  546345 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:28:16.522844  546345 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 22:28:16.522912  546345 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 22:28:16.529836  546345 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 22:28:16.541819  546345 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 22:28:16.553461  546345 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
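The three scp-from-memory writes above install the kubelet drop-in, the unit file, and the rendered kubeadm config. A sketch of rendering the ExecStart drop-in shown earlier with text/template; the values are the ones from this run, and the template shape simply mirrors the logged unit file:

```go
// Render the kubelet systemd drop-in from per-cluster values.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	t.Execute(os.Stdout, map[string]string{
		"Version": "v1.35.0-beta.0",
		"Node":    "newest-cni-250247",
		"IP":      "192.168.85.2",
	})
}
```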
	I1202 22:28:16.565531  546345 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 22:28:16.569041  546345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:28:16.578309  546345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:28:16.682927  546345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:28:16.699616  546345 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247 for IP: 192.168.85.2
	I1202 22:28:16.699641  546345 certs.go:195] generating shared ca certs ...
	I1202 22:28:16.699658  546345 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:28:16.699787  546345 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 22:28:16.699846  546345 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 22:28:16.699857  546345 certs.go:257] generating profile certs ...
	I1202 22:28:16.699953  546345 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.key
	I1202 22:28:16.700029  546345 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde
	I1202 22:28:16.700095  546345 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key
	I1202 22:28:16.700208  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 22:28:16.700249  546345 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 22:28:16.700262  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 22:28:16.700295  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 22:28:16.700323  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 22:28:16.700356  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 22:28:16.700412  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:28:16.701077  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 22:28:16.721941  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 22:28:16.740644  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 22:28:16.759568  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 22:28:16.776264  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 22:28:16.794239  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 22:28:16.814293  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 22:28:16.833481  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 22:28:16.852733  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 22:28:16.870078  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 22:28:16.886149  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 22:28:16.902507  546345 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 22:28:16.913942  546345 ssh_runner.go:195] Run: openssl version
	I1202 22:28:16.919938  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 22:28:16.927825  546345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:28:16.931606  546345 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:28:16.931675  546345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:28:16.974237  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 22:28:16.981828  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 22:28:16.989638  546345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 22:28:16.992999  546345 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 22:28:16.993061  546345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 22:28:17.033731  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
	I1202 22:28:17.041307  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 22:28:17.049114  546345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 22:28:17.052710  546345 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 22:28:17.052816  546345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 22:28:17.093368  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 22:28:17.101039  546345 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 22:28:17.104530  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 22:28:17.145234  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 22:28:17.186252  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 22:28:17.227251  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 22:28:17.270184  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 22:28:17.315680  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
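The six openssl runs above all pass `-checkend 86400`, i.e. "does this cert remain valid for at least the next 24 hours?". The same check in pure Go, for one of the logged paths:

```go
// expiresWithin reports whether the certificate at path expires within d,
// matching the semantics of `openssl x509 -checkend`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("needs renewal:", soon)
}
```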
	I1202 22:28:17.356357  546345 kubeadm.go:401] StartCluster: {Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:28:17.356449  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 22:28:17.356551  546345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:28:17.384974  546345 cri.go:89] found id: ""
	I1202 22:28:17.385084  546345 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 22:28:17.392914  546345 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 22:28:17.392983  546345 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 22:28:17.393055  546345 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 22:28:17.400365  546345 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 22:28:17.400969  546345 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-250247" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:28:17.401222  546345 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-261381/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-250247" cluster setting kubeconfig missing "newest-cni-250247" context setting]
	I1202 22:28:17.401752  546345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:28:17.403065  546345 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 22:28:17.410696  546345 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 22:28:17.410762  546345 kubeadm.go:602] duration metric: took 17.7594ms to restartPrimaryControlPlane
	I1202 22:28:17.410793  546345 kubeadm.go:403] duration metric: took 54.438388ms to StartCluster
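restartPrimaryControlPlane decides whether kubeadm needs to rerun by diffing the previously applied config against the freshly rendered one (the `diff -u` at 22:28:17.403 above); identical files mean "does not require reconfiguration". A sketch of that decision, relying on diff's documented exit codes (0 identical, 1 different):

```go
// needsReconfigure compares the old and new kubeadm configs with diff(1).
package main

import (
	"fmt"
	"os/exec"
)

func needsReconfigure(oldPath, newPath string) (bool, error) {
	err := exec.Command("diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil // exit 0: files are identical
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, nil // exit 1: files differ
	}
	return false, err // exit 2 or failure to run diff at all
}

func main() {
	changed, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	fmt.Println("reconfigure:", changed)
}
```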
	I1202 22:28:17.410829  546345 settings.go:142] acquiring lock: {Name:mk484fa83ac7553aeb154b510943680cadb4046e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:28:17.410902  546345 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:28:17.412749  546345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:28:17.413013  546345 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 22:28:17.416416  546345 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 22:28:17.416535  546345 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-250247"
	I1202 22:28:17.416566  546345 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-250247"
	I1202 22:28:17.416596  546345 config.go:182] Loaded profile config "newest-cni-250247": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:28:17.416607  546345 host.go:66] Checking if "newest-cni-250247" exists ...
	I1202 22:28:17.416873  546345 addons.go:70] Setting dashboard=true in profile "newest-cni-250247"
	I1202 22:28:17.416893  546345 addons.go:239] Setting addon dashboard=true in "newest-cni-250247"
	W1202 22:28:17.416900  546345 addons.go:248] addon dashboard should already be in state true
	I1202 22:28:17.416923  546345 host.go:66] Checking if "newest-cni-250247" exists ...
	I1202 22:28:17.417319  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:17.417762  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:17.418220  546345 addons.go:70] Setting default-storageclass=true in profile "newest-cni-250247"
	I1202 22:28:17.418240  546345 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-250247"
	I1202 22:28:17.418515  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:17.421722  546345 out.go:179] * Verifying Kubernetes components...
	I1202 22:28:17.424546  546345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:28:17.473567  546345 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 22:28:17.473567  546345 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:28:17.475110  546345 addons.go:239] Setting addon default-storageclass=true in "newest-cni-250247"
	I1202 22:28:17.475145  546345 host.go:66] Checking if "newest-cni-250247" exists ...
	I1202 22:28:17.475548  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:17.477614  546345 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:17.477633  546345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 22:28:17.477833  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:17.481801  546345 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 22:28:17.489727  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 22:28:17.489757  546345 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 22:28:17.489831  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:17.519689  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:17.519729  546345 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 22:28:17.519742  546345 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 22:28:17.519796  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:17.551180  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:17.565506  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:17.644850  546345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:28:17.726531  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:17.763912  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 22:28:17.792014  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 22:28:17.792042  546345 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 22:28:17.824225  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 22:28:17.824250  546345 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 22:28:17.838468  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 22:28:17.838492  546345 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 22:28:17.851940  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 22:28:17.851965  546345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 22:28:17.864211  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 22:28:17.864276  546345 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 22:28:17.876057  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 22:28:17.876079  546345 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 22:28:17.887797  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 22:28:17.887867  546345 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 22:28:17.899526  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 22:28:17.899547  546345 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 22:28:17.911602  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:17.911626  546345 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 22:28:17.923996  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:18.303299  546345 api_server.go:52] waiting for apiserver process to appear ...
	I1202 22:28:18.303418  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:18.303565  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.303612  546345 retry.go:31] will retry after 133.710161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:18.303717  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.303748  546345 retry.go:31] will retry after 138.021594ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:18.303974  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.304008  546345 retry.go:31] will retry after 237.208538ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.438371  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:18.442705  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:18.512074  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.512108  546345 retry.go:31] will retry after 489.996663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:18.521184  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.521218  546345 retry.go:31] will retry after 506.041741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.542348  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:18.605737  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.605775  546345 retry.go:31] will retry after 347.613617ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.804191  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:18.953629  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:19.003207  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:19.021755  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.021793  546345 retry.go:31] will retry after 285.211473ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.028084  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:19.152805  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.152839  546345 retry.go:31] will retry after 301.33995ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:19.169007  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.169038  546345 retry.go:31] will retry after 787.522923ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.304323  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:19.307756  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:19.364720  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.364752  546345 retry.go:31] will retry after 744.498002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.454779  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:19.514605  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.514684  546345 retry.go:31] will retry after 936.080491ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.803793  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:19.957439  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:20.022370  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.022406  546345 retry.go:31] will retry after 798.963887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.109555  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:20.176777  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.176873  546345 retry.go:31] will retry after 799.677911ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.303906  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:20.451319  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:20.513056  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.513087  546345 retry.go:31] will retry after 774.001274ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.804493  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:20.822263  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:20.884574  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.884663  546345 retry.go:31] will retry after 1.794003449s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.976884  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:21.043200  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:21.043233  546345 retry.go:31] will retry after 2.577364105s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
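The retry cadence above (2.58s here, then anywhere from ~1.4s to ~11s on later attempts) comes from minikube's retry helper re-running the same kubectl apply with a growing, jittered delay. The sketch below shows that shape only; retryApply, the starting delay, the jitter, and the backoff factor are assumptions for illustration, not minikube's actual retry.go code.

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // retryApply re-runs a kubectl apply until it succeeds or attempts run
    // out, sleeping a jittered, growing interval between tries -- the same
    // shape as the "will retry after ..." lines in the log.
    func retryApply(args []string, attempts int) error {
        backoff := 2 * time.Second
        var lastErr error
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("kubectl", args...).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
            // Jitter the delay so parallel appliers do not retry in lockstep.
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
            fmt.Printf("will retry after %s\n", sleep)
            time.Sleep(sleep)
            backoff = backoff * 3 / 2
        }
        return lastErr
    }

    func main() {
        args := []string{"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml"}
        if err := retryApply(args, 5); err != nil {
            fmt.Println("giving up:", err)
        }
    }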
	I1202 22:28:21.287368  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:21.303812  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:21.396263  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:21.396297  546345 retry.go:31] will retry after 1.406655136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
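Note the failure mode: kubectl validates manifests client-side by downloading the OpenAPI schema from the apiserver, so while nothing is listening on localhost:8443 every apply fails at validation before a single object is sent, and --validate=false would only skip the check, not fix the cluster. A hypothetical reachability probe for that same endpoint is sketched below; probeOpenAPI is illustrative, not a minikube function.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeOpenAPI checks whether the apiserver's OpenAPI endpoint -- the URL
    // kubectl's client-side validation fetches before every apply -- is
    // reachable. While it errors, every apply without --validate=false fails
    // exactly as in the log.
    func probeOpenAPI(base string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The probe only cares about reachability, so skip cert checks.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(base + "/openapi/v2")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        fmt.Println("openapi endpoint status:", resp.Status)
        return nil
    }

    func main() {
        if err := probeOpenAPI("https://localhost:8443"); err != nil {
            fmt.Println("apiserver not ready:", err)
        }
    }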
	I1202 22:28:21.803778  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:22.303682  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:22.678940  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:22.734117  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:22.734151  546345 retry.go:31] will retry after 2.241021271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1202 22:28:22.803453  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:22.803660  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:22.908987  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:22.909065  546345 retry.go:31] will retry after 2.592452064s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1202 22:28:23.304587  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:23.621298  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:23.681960  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:23.681992  546345 retry.go:31] will retry after 4.002263162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1202 22:28:23.804126  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:24.303637  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:24.803614  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:24.976147  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:25.036436  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:25.036470  546345 retry.go:31] will retry after 3.520246776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1202 22:28:25.303592  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:25.502542  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:25.567000  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:25.567033  546345 retry.go:31] will retry after 5.323254411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1202 22:28:25.804224  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:26.304369  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:26.803599  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:27.303952  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:27.684919  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:27.748186  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:27.748220  546345 retry.go:31] will retry after 5.733866836s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1202 22:28:27.804400  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:28.304209  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:28.556915  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:28.614437  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:28.614469  546345 retry.go:31] will retry after 5.59146354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1202 22:28:28.803555  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:29.303563  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:29.803564  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:30.304278  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:30.803599  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:30.891315  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:30.954133  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:30.954165  546345 retry.go:31] will retry after 6.008326018s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1202 22:28:31.303642  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:31.803766  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:32.304456  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:32.804272  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:33.304447  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:33.482755  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:33.544609  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:33.544640  546345 retry.go:31] will retry after 5.236447557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1202 22:28:33.804125  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:34.206989  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:34.267528  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:34.267562  546345 retry.go:31] will retry after 5.128568146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1202 22:28:34.303642  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:34.804011  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:35.304181  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:35.803881  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:36.304159  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:36.804539  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:36.963637  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:37.037814  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:37.037848  546345 retry.go:31] will retry after 8.195284378s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1202 22:28:37.304208  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:37.804338  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:38.303552  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:38.781347  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:38.803757  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:38.846454  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:38.846487  546345 retry.go:31] will retry after 10.92120738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1202 22:28:39.304100  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:39.396834  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:39.454859  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:39.454893  546345 retry.go:31] will retry after 6.04045657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1202 22:28:39.804469  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:40.303596  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:40.804541  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:41.303922  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:41.803906  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:42.304508  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:42.804313  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:43.304463  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:43.803539  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:44.304169  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:44.803620  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
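The half-second pgrep cadence throughout this stretch is minikube polling for a kube-apiserver process inside the node before it will consider the control plane back. A minimal sketch of that polling loop follows, assuming a fixed 500ms interval; waitForAPIServer is a hypothetical helper, not the actual ssh_runner code.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep at a fixed interval until a kube-apiserver
    // process shows up or the deadline passes -- the same half-second cadence
    // as the repeated ssh_runner pgrep lines in the log.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 when at least one process matches the pattern.
            err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
            if err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }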
	I1202 22:28:45.235996  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:45.303907  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:45.410878  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:45.410909  546345 retry.go:31] will retry after 9.368309576s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1202 22:28:45.496112  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:45.553672  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:45.553705  546345 retry.go:31] will retry after 7.750202952s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
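Each apply here fails before anything reaches the cluster: kubectl's client-side validation first downloads the OpenAPI schema from https://localhost:8443/openapi/v2, and with no apiserver listening the download itself is refused. The stderr suggests --validate=false, but that only skips the schema fetch; the apply still needs a reachable apiserver, so the real fix is for kube-apiserver to come up. A hedged sketch of confirming unreachability and retrying without client-side validation:

    # Confirm the apiserver is actually unreachable, then (if appropriate)
    # retry the apply without client-side schema validation.
    curl -ksf https://localhost:8443/healthz || echo "apiserver unreachable"
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      apply --validate=false -f /etc/kubernetes/addons/storageclass.yaml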
	I1202 22:28:45.804015  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:46.303559  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:46.804327  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:47.303603  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:47.804053  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:48.303550  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:48.803634  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:49.303688  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:49.768489  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:49.804064  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:49.895914  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:49.895948  546345 retry.go:31] will retry after 11.070404971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
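The dashboard addon is applied as a single kubectl invocation with ten -f flags, so one unreachable apiserver produces ten identical validation errors and the whole batch exits 1. A sketch that gates the batched apply on the apiserver's own /readyz endpoint (polling interval illustrative; remaining -f flags elided):

    # Wait for apiserver readiness before the batched dashboard apply.
    until curl -ksf https://localhost:8443/readyz >/dev/null; do sleep 2; done
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml \
                    -f /etc/kubernetes/addons/dashboard-svc.yaml  # ... remaining files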
	I1202 22:28:50.304462  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:50.803593  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:51.304256  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:51.804118  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:52.304451  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:52.804096  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:53.303837  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:53.304041  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:53.361880  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:53.361915  546345 retry.go:31] will retry after 21.51867829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:53.804496  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:54.303718  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:54.779367  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:54.803837  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:54.852160  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:54.852195  546345 retry.go:31] will retry after 25.514460464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
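The retry.go intervals logged above grow roughly geometrically with jitter: storageclass went 6.04s → 7.75s → 21.5s, storage-provisioner 9.37s → 25.5s. A minimal sketch of that retry shape (the constants are assumptions, not minikube's actual backoff parameters):

    # Retry with roughly-doubling, jittered delays, mirroring the shape of
    # the intervals retry.go logs above; constants are illustrative.
    delay=5
    for attempt in 1 2 3 4 5 6; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
        apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml && break
      sleep $(( delay + RANDOM % delay ))   # jitter in [delay, 2*delay)
      delay=$(( delay * 2 ))
    done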
	I1202 22:28:55.303807  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:55.804288  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:56.304329  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:56.803616  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:57.303836  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:57.804152  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:58.304034  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:58.803992  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:59.304109  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:59.804084  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:00.305594  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:00.803492  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:00.967275  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:29:01.023919  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:01.023952  546345 retry.go:31] will retry after 14.799716379s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:01.304168  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:01.804346  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:02.304261  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:02.803541  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:03.304078  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:03.804260  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:04.304145  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:04.803593  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:05.304303  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:05.804290  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:06.304157  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:06.804297  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:07.304486  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:07.803594  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:08.303514  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:08.803514  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:09.304264  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:09.804046  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:10.304151  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:10.804338  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:11.304108  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:11.803600  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:12.304520  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:12.804189  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:13.304155  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:13.803517  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:14.304548  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:14.803761  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:14.881559  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:29:14.937730  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:14.937760  546345 retry.go:31] will retry after 41.941175985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
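At this point no kube-apiserver container has even been created, but once one exists (even an exited one), its logs are the fastest way to see why it keeps dying. A hedged diagnostic sketch (the head/tail trimming is illustrative):

    # Pull logs from the newest kube-apiserver container, if any exists.
    id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
    [ -n "$id" ] && sudo crictl logs "$id" | tail -n 50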
	I1202 22:29:15.316948  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:15.804548  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:15.823888  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:29:15.884943  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:15.884976  546345 retry.go:31] will retry after 35.611848449s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:16.303570  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:16.803687  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:17.304005  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:17.804234  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:17.804335  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:17.829227  546345 cri.go:89] found id: ""
	I1202 22:29:17.829257  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.829265  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:17.829272  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:17.829332  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:17.853121  546345 cri.go:89] found id: ""
	I1202 22:29:17.853146  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.853154  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:17.853161  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:17.853219  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:17.877170  546345 cri.go:89] found id: ""
	I1202 22:29:17.877195  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.877204  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:17.877210  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:17.877267  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:17.904673  546345 cri.go:89] found id: ""
	I1202 22:29:17.904698  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.904707  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:17.904717  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:17.904784  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:17.928244  546345 cri.go:89] found id: ""
	I1202 22:29:17.928284  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.928294  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:17.928301  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:17.928363  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:17.951262  546345 cri.go:89] found id: ""
	I1202 22:29:17.951283  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.951292  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:17.951299  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:17.951363  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:17.979941  546345 cri.go:89] found id: ""
	I1202 22:29:17.979971  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.979980  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:17.979987  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:17.980046  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:18.014330  546345 cri.go:89] found id: ""
	I1202 22:29:18.014352  546345 logs.go:282] 0 containers: []
	W1202 22:29:18.014361  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
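The sweep above enumerates every expected control-plane and addon container by name; an empty ID list for all eight names means nothing was ever created, which points at kubelet or containerd rather than a crash-looping pod. An equivalent manual sweep:

    # Enumerate the same component names the log checks, one crictl query each.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done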
	I1202 22:29:18.014370  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:18.014382  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:18.070623  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:18.070659  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:18.086453  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:18.086483  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:18.147206  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:18.139601    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.140184    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.141932    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.142471    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.144157    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:18.139601    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.140184    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.141932    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.142471    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.144157    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:18.147229  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:18.147242  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:18.171557  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:18.171592  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
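The gather step collects the kubelet and containerd journals, kernel warnings, a node description, and container status, falling back from crictl to docker for the last one. The same evidence can be pulled by hand with the exact commands the log records:

    # Manual re-run of the diagnostic sweep, verbatim from the log above.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u containerd -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a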
	I1202 22:29:20.367703  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:29:20.422565  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:20.422597  546345 retry.go:31] will retry after 40.968515426s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:20.701050  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:20.711132  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:20.711213  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:20.734019  546345 cri.go:89] found id: ""
	I1202 22:29:20.734042  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.734050  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:20.734057  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:20.734114  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:20.756521  546345 cri.go:89] found id: ""
	I1202 22:29:20.756546  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.756554  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:20.756561  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:20.756620  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:20.787826  546345 cri.go:89] found id: ""
	I1202 22:29:20.787852  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.787869  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:20.787876  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:20.787939  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:20.811402  546345 cri.go:89] found id: ""
	I1202 22:29:20.811427  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.811435  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:20.811441  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:20.811500  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:20.835289  546345 cri.go:89] found id: ""
	I1202 22:29:20.835314  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.835322  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:20.835329  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:20.835404  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:20.858522  546345 cri.go:89] found id: ""
	I1202 22:29:20.858548  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.858556  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:20.858563  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:20.858622  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:20.883759  546345 cri.go:89] found id: ""
	I1202 22:29:20.883783  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.883791  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:20.883798  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:20.883857  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:20.907968  546345 cri.go:89] found id: ""
	I1202 22:29:20.907992  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.908001  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:20.908010  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:20.908020  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:20.962992  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:20.963028  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:20.978472  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:20.978499  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:21.039749  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:21.032843    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.033345    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.034809    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.035236    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.036659    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:21.032843    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.033345    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.034809    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.035236    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.036659    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:21.039771  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:21.039784  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:21.064157  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:21.064194  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:23.595745  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:23.606920  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:23.606996  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:23.633420  546345 cri.go:89] found id: ""
	I1202 22:29:23.633450  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.633459  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:23.633473  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:23.633532  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:23.659559  546345 cri.go:89] found id: ""
	I1202 22:29:23.659581  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.659590  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:23.659596  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:23.659663  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:23.684986  546345 cri.go:89] found id: ""
	I1202 22:29:23.685010  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.685031  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:23.685039  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:23.685099  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:23.709487  546345 cri.go:89] found id: ""
	I1202 22:29:23.709560  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.709583  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:23.709604  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:23.709734  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:23.734133  546345 cri.go:89] found id: ""
	I1202 22:29:23.734159  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.734167  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:23.734173  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:23.734233  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:23.758126  546345 cri.go:89] found id: ""
	I1202 22:29:23.758190  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.758213  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:23.758234  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:23.758327  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:23.782448  546345 cri.go:89] found id: ""
	I1202 22:29:23.782471  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.782480  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:23.782505  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:23.782579  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:23.806736  546345 cri.go:89] found id: ""
	I1202 22:29:23.806761  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.806770  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:23.806780  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:23.806790  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:23.865578  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:23.865619  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:23.881434  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:23.881470  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:23.944584  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:23.936843    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.937517    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.939081    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.939622    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.941360    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:23.936843    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.937517    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.939081    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.939622    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.941360    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:23.944606  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:23.944619  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:23.970159  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:23.970207  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
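	The sweep above is minikube's diagnostic pass: for each control-plane component it shells out to crictl and records an empty ID list. A minimal Go sketch of that pattern, assuming a hypothetical listContainerIDs helper (the real logic lives in minikube's cri.go and runs the commands over SSH):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mirrors the "sudo crictl ps -a --quiet --name=<name>"
	// calls in the log: it returns the IDs of all containers, in any state,
	// whose name matches. crictl exits 0 with empty output when nothing
	// matches, which is why the log records found id: "" rather than an error.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			ids, err := listContainerIDs(name)
			if err != nil {
				fmt.Printf("listing %q failed: %v\n", name, err)
				continue
			}
			if len(ids) == 0 {
				// Corresponds to the W-level "No container was found matching ..." lines.
				fmt.Printf("no container found matching %q\n", name)
			}
		}
	}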
	I1202 22:29:26.498138  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:26.508783  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:26.508852  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:26.537015  546345 cri.go:89] found id: ""
	I1202 22:29:26.537037  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.537046  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:26.537053  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:26.537110  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:26.574312  546345 cri.go:89] found id: ""
	I1202 22:29:26.574339  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.574347  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:26.574354  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:26.574411  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:26.629052  546345 cri.go:89] found id: ""
	I1202 22:29:26.629079  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.629087  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:26.629094  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:26.629150  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:26.658217  546345 cri.go:89] found id: ""
	I1202 22:29:26.658251  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.658259  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:26.658266  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:26.658337  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:26.681717  546345 cri.go:89] found id: ""
	I1202 22:29:26.681751  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.681760  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:26.681778  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:26.681850  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:26.704611  546345 cri.go:89] found id: ""
	I1202 22:29:26.704646  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.704655  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:26.704661  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:26.704733  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:26.728028  546345 cri.go:89] found id: ""
	I1202 22:29:26.728091  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.728115  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:26.728137  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:26.728223  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:26.755557  546345 cri.go:89] found id: ""
	I1202 22:29:26.755582  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.755590  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:26.755600  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:26.755611  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:26.786053  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:26.786080  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:26.841068  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:26.841100  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:26.856799  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:26.856829  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:26.924274  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:26.913901    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.914406    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.918374    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.919140    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.920188    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:26.913901    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.914406    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.918374    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.919140    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.920188    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:26.924338  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:26.924358  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
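	With no containers to inspect, each cycle falls back to host-level sources: journalctl for the kubelet and containerd units, a filtered dmesg, and a crictl-or-docker process listing. A sketch of that collection step, assuming a hypothetical gather helper (minikube actually executes these through its ssh_runner, as the file references above show); the command strings are taken verbatim from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one diagnostic command through bash, exactly as the log's
	// ssh_runner lines do, and prints its combined output with a label.
	func gather(label, command string) {
		out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
		fmt.Printf("==> %s (err=%v)\n%s\n", label, err, out)
	}

	func main() {
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("containerd", "sudo journalctl -u containerd -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	}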
	I1202 22:29:29.449918  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:29.460186  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:29.460259  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:29.483893  546345 cri.go:89] found id: ""
	I1202 22:29:29.483915  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.483924  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:29.483930  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:29.483990  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:29.507973  546345 cri.go:89] found id: ""
	I1202 22:29:29.507999  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.508007  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:29.508013  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:29.508073  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:29.532020  546345 cri.go:89] found id: ""
	I1202 22:29:29.532045  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.532054  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:29.532061  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:29.532119  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:29.583563  546345 cri.go:89] found id: ""
	I1202 22:29:29.583590  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.583599  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:29.583606  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:29.583664  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:29.626796  546345 cri.go:89] found id: ""
	I1202 22:29:29.626821  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.626830  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:29.626837  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:29.626910  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:29.650151  546345 cri.go:89] found id: ""
	I1202 22:29:29.650179  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.650186  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:29.650193  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:29.650254  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:29.677989  546345 cri.go:89] found id: ""
	I1202 22:29:29.678015  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.678023  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:29.678031  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:29.678090  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:29.707431  546345 cri.go:89] found id: ""
	I1202 22:29:29.707457  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.707465  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:29.707475  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:29.707486  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:29.773447  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:29.766251    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.766804    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.768321    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.768748    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.770331    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:29.766251    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.766804    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.768321    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.768748    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.770331    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:29.773470  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:29.773484  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:29.798530  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:29.798604  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:29.825490  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:29.825517  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:29.884423  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:29.884461  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
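	Every "describe nodes" attempt above fails identically: dial tcp [::1]:8443 is refused, meaning nothing is listening on the apiserver port at all, so kubectl cannot even fetch the API group list. A hedged sketch of the same reachability check, using a hypothetical probeAPIServer helper in place of kubectl:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeAPIServer checks whether anything is listening on the apiserver
	// address. A "connect: connection refused" here is exactly what the
	// repeated describe-nodes failures report: no kube-apiserver process
	// ever bound localhost:8443. Illustrative helper, not minikube code.
	func probeAPIServer(addr string) error {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		if err := probeAPIServer("localhost:8443"); err != nil {
			fmt.Println("apiserver not reachable:", err)
		} else {
			fmt.Println("apiserver port is open")
		}
	}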
	I1202 22:29:32.401788  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:32.413697  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:32.413768  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:32.447463  546345 cri.go:89] found id: ""
	I1202 22:29:32.447486  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.447494  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:32.447501  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:32.447560  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:32.480451  546345 cri.go:89] found id: ""
	I1202 22:29:32.480473  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.480481  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:32.480487  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:32.480543  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:32.518559  546345 cri.go:89] found id: ""
	I1202 22:29:32.518581  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.518590  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:32.518596  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:32.518652  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:32.570716  546345 cri.go:89] found id: ""
	I1202 22:29:32.570737  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.570746  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:32.570752  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:32.570809  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:32.612686  546345 cri.go:89] found id: ""
	I1202 22:29:32.612722  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.612731  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:32.612738  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:32.612797  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:32.651570  546345 cri.go:89] found id: ""
	I1202 22:29:32.651592  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.651600  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:32.651607  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:32.651671  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:32.679451  546345 cri.go:89] found id: ""
	I1202 22:29:32.679475  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.679484  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:32.679490  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:32.679552  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:32.705124  546345 cri.go:89] found id: ""
	I1202 22:29:32.705149  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.705170  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:32.705180  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:32.705193  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:32.772557  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:32.763930    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.764653    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.766262    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.766778    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.768469    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:32.763930    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.764653    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.766262    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.766778    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.768469    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:32.772578  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:32.772590  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:32.798210  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:32.798246  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:32.826270  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:32.826298  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:32.885460  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:32.885496  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
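	The timestamps show the outer loop retrying on a roughly three-second cadence: pgrep for a kube-apiserver process, then the container sweep, then log gathering. A sketch of that wait loop under assumed names (waitForAPIServerProcess is illustrative; minikube's real wait logic is more involved):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls "sudo pgrep -xnf kube-apiserver.*minikube.*"
	// until it exits 0 (a match) or the deadline passes. pgrep exits non-zero
	// when no process matches, which is what keeps this loop spinning in the
	// failing run above.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil // process found
			}
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}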
	I1202 22:29:35.401743  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:35.412979  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:35.413051  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:35.438650  546345 cri.go:89] found id: ""
	I1202 22:29:35.438684  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.438703  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:35.438710  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:35.438787  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:35.467326  546345 cri.go:89] found id: ""
	I1202 22:29:35.467350  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.467358  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:35.467365  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:35.467444  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:35.492513  546345 cri.go:89] found id: ""
	I1202 22:29:35.492546  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.492554  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:35.492561  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:35.492659  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:35.517758  546345 cri.go:89] found id: ""
	I1202 22:29:35.517785  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.517794  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:35.517801  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:35.517861  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:35.564303  546345 cri.go:89] found id: ""
	I1202 22:29:35.564329  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.564338  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:35.564345  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:35.564431  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:35.610173  546345 cri.go:89] found id: ""
	I1202 22:29:35.610253  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.610289  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:35.610311  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:35.610412  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:35.647481  546345 cri.go:89] found id: ""
	I1202 22:29:35.647545  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.647560  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:35.647567  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:35.647628  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:35.671535  546345 cri.go:89] found id: ""
	I1202 22:29:35.671561  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.671569  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:35.671579  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:35.671591  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:35.736069  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:35.728833    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.729443    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.730886    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.731384    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.732973    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:35.728833    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.729443    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.730886    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.731384    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.732973    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:35.736092  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:35.736106  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:35.760759  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:35.760794  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:35.786652  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:35.786678  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:35.842999  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:35.843035  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:38.358963  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:38.369060  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:38.369123  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:38.400304  546345 cri.go:89] found id: ""
	I1202 22:29:38.400330  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.400339  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:38.400351  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:38.400407  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:38.424847  546345 cri.go:89] found id: ""
	I1202 22:29:38.424873  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.424881  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:38.424888  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:38.424946  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:38.452445  546345 cri.go:89] found id: ""
	I1202 22:29:38.452472  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.452481  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:38.452487  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:38.452544  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:38.480761  546345 cri.go:89] found id: ""
	I1202 22:29:38.480783  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.480804  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:38.480811  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:38.480870  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:38.505019  546345 cri.go:89] found id: ""
	I1202 22:29:38.505044  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.505052  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:38.505059  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:38.505116  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:38.528009  546345 cri.go:89] found id: ""
	I1202 22:29:38.528036  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.528045  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:38.528052  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:38.528109  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:38.595576  546345 cri.go:89] found id: ""
	I1202 22:29:38.595598  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.595606  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:38.595613  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:38.595671  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:38.638153  546345 cri.go:89] found id: ""
	I1202 22:29:38.638177  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.638186  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:38.638195  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:38.638206  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:38.653639  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:38.653696  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:38.715223  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:38.707314    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:38.708623    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:38.709486    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:38.710286    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:38.711017    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:38.707314    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:38.708623    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:38.709486    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:38.710286    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:38.711017    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:38.715245  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:38.715258  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:38.739162  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:38.739196  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:38.766317  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:38.766345  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:41.321520  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:41.331550  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:41.331636  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:41.355934  546345 cri.go:89] found id: ""
	I1202 22:29:41.355959  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.355968  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:41.355975  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:41.356035  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:41.381232  546345 cri.go:89] found id: ""
	I1202 22:29:41.381254  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.381263  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:41.381269  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:41.381325  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:41.406147  546345 cri.go:89] found id: ""
	I1202 22:29:41.406171  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.406179  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:41.406186  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:41.406246  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:41.435516  546345 cri.go:89] found id: ""
	I1202 22:29:41.435542  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.435551  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:41.435559  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:41.435619  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:41.460909  546345 cri.go:89] found id: ""
	I1202 22:29:41.460932  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.460941  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:41.460948  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:41.461035  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:41.487520  546345 cri.go:89] found id: ""
	I1202 22:29:41.487553  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.487570  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:41.487577  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:41.487648  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:41.512354  546345 cri.go:89] found id: ""
	I1202 22:29:41.512425  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.512449  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:41.512469  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:41.512552  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:41.536885  546345 cri.go:89] found id: ""
	I1202 22:29:41.536908  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.536917  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:41.536927  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:41.536938  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:41.607465  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:41.607514  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:41.635996  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:41.636025  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:41.712077  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:41.704951    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:41.705647    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:41.707121    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:41.707514    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:41.708659    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:41.704951    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:41.705647    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:41.707121    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:41.707514    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:41.708659    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:41.712100  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:41.712113  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:41.736613  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:41.736660  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:44.265095  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:44.276615  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:44.276703  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:44.303304  546345 cri.go:89] found id: ""
	I1202 22:29:44.303325  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.303334  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:44.303340  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:44.303403  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:44.333144  546345 cri.go:89] found id: ""
	I1202 22:29:44.333167  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.333176  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:44.333182  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:44.333258  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:44.359646  546345 cri.go:89] found id: ""
	I1202 22:29:44.359675  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.359684  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:44.359691  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:44.359751  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:44.384230  546345 cri.go:89] found id: ""
	I1202 22:29:44.384255  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.384264  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:44.384270  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:44.384342  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:44.409648  546345 cri.go:89] found id: ""
	I1202 22:29:44.409701  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.409711  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:44.409718  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:44.409776  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:44.434410  546345 cri.go:89] found id: ""
	I1202 22:29:44.434437  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.434446  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:44.434452  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:44.434512  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:44.458352  546345 cri.go:89] found id: ""
	I1202 22:29:44.458376  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.458385  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:44.458392  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:44.458465  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:44.486353  546345 cri.go:89] found id: ""
	I1202 22:29:44.486385  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.486396  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:44.486420  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:44.486436  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:44.510698  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:44.510737  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:44.552264  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:44.552293  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:44.660418  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:44.660451  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:44.676162  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:44.676230  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:44.741313  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:44.734563    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:44.735043    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:44.736515    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:44.736835    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:44.738249    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:44.734563    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:44.735043    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:44.736515    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:44.736835    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:44.738249    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:47.241695  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:47.253909  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:47.253977  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:47.280127  546345 cri.go:89] found id: ""
	I1202 22:29:47.280151  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.280159  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:47.280166  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:47.280227  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:47.309688  546345 cri.go:89] found id: ""
	I1202 22:29:47.309711  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.309719  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:47.309726  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:47.309795  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:47.334233  546345 cri.go:89] found id: ""
	I1202 22:29:47.334259  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.334268  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:47.334275  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:47.334330  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:47.363203  546345 cri.go:89] found id: ""
	I1202 22:29:47.363228  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.363237  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:47.363245  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:47.363314  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:47.390073  546345 cri.go:89] found id: ""
	I1202 22:29:47.390096  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.390104  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:47.390111  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:47.390168  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:47.416413  546345 cri.go:89] found id: ""
	I1202 22:29:47.416435  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.416444  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:47.416451  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:47.416518  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:47.440718  546345 cri.go:89] found id: ""
	I1202 22:29:47.440743  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.440753  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:47.440759  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:47.440818  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:47.463875  546345 cri.go:89] found id: ""
	I1202 22:29:47.463901  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.463910  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:47.463920  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:47.463931  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:47.492814  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:47.492842  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:47.558225  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:47.558264  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:47.574145  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:47.574174  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:47.666298  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:47.658677    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:47.659357    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:47.660936    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:47.661477    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:47.663047    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:47.666357  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:47.666385  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
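Each pass of this loop gathers the same fixed bundle of diagnostics: the kubelet and containerd journals, a filtered dmesg, a describe-nodes attempt, and a container listing. The gather step can be reproduced by hand with the exact commands from the Run: lines above (the 400-line caps are simply what minikube uses here):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a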
	I1202 22:29:50.191511  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:50.202178  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:50.202258  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:50.229173  546345 cri.go:89] found id: ""
	I1202 22:29:50.229213  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.229222  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:50.229228  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:50.229293  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:50.253932  546345 cri.go:89] found id: ""
	I1202 22:29:50.253962  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.253971  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:50.253977  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:50.254033  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:50.278257  546345 cri.go:89] found id: ""
	I1202 22:29:50.278280  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.278289  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:50.278296  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:50.278351  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:50.306884  546345 cri.go:89] found id: ""
	I1202 22:29:50.306907  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.306914  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:50.306921  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:50.306989  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:50.331454  546345 cri.go:89] found id: ""
	I1202 22:29:50.331528  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.331553  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:50.331566  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:50.331658  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:50.355157  546345 cri.go:89] found id: ""
	I1202 22:29:50.355230  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.355254  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:50.355268  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:50.355346  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:50.380390  546345 cri.go:89] found id: ""
	I1202 22:29:50.380415  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.380424  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:50.380430  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:50.380518  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:50.408708  546345 cri.go:89] found id: ""
	I1202 22:29:50.408733  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.408742  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:50.408751  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:50.408800  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:50.466607  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:50.466641  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:50.482087  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:50.482154  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:50.548310  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:50.537223    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:50.537900    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:50.541639    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:50.542300    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:50.543919    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:50.548334  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:50.548347  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:50.581455  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:50.581492  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:51.497099  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:29:51.556470  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:29:51.556588  546345 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
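The --validate=false hint in the stderr is narrower than it looks: client-side validation has to download /openapi/v2 from the apiserver first, so it is the validation round trip that fails, before any apply is attempted. Skipping validation, as sketched below for one of the manifests, only removes that first request; with the apiserver still unreachable the apply itself would fail the same way:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/dashboard-ns.yaml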
	I1202 22:29:53.133025  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:53.143115  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:53.143180  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:53.166147  546345 cri.go:89] found id: ""
	I1202 22:29:53.166169  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.166177  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:53.166183  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:53.166251  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:53.191215  546345 cri.go:89] found id: ""
	I1202 22:29:53.191238  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.191247  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:53.191253  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:53.191329  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:53.214527  546345 cri.go:89] found id: ""
	I1202 22:29:53.214593  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.214616  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:53.214631  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:53.214701  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:53.239062  546345 cri.go:89] found id: ""
	I1202 22:29:53.239089  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.239098  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:53.239105  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:53.239270  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:53.269346  546345 cri.go:89] found id: ""
	I1202 22:29:53.269416  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.269440  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:53.269462  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:53.269571  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:53.293728  546345 cri.go:89] found id: ""
	I1202 22:29:53.293802  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.293825  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:53.293845  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:53.293942  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:53.322079  546345 cri.go:89] found id: ""
	I1202 22:29:53.322106  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.322115  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:53.322121  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:53.322180  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:53.345988  546345 cri.go:89] found id: ""
	I1202 22:29:53.346055  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.346079  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:53.346103  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:53.346128  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:53.402872  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:53.402909  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:53.418121  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:53.418150  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:53.480652  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:53.472986    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:53.473648    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:53.475387    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:53.475778    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:53.477212    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:53.480725  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:53.480756  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:53.505378  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:53.505414  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:56.037255  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:56.048340  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:56.048412  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:56.080851  546345 cri.go:89] found id: ""
	I1202 22:29:56.080878  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.080888  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:56.080894  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:56.080963  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:56.105446  546345 cri.go:89] found id: ""
	I1202 22:29:56.105472  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.105481  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:56.105488  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:56.105545  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:56.131318  546345 cri.go:89] found id: ""
	I1202 22:29:56.131344  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.131352  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:56.131358  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:56.131414  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:56.159096  546345 cri.go:89] found id: ""
	I1202 22:29:56.159118  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.159126  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:56.159132  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:56.159191  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:56.183173  546345 cri.go:89] found id: ""
	I1202 22:29:56.183199  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.183207  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:56.183214  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:56.183279  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:56.207984  546345 cri.go:89] found id: ""
	I1202 22:29:56.208017  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.208029  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:56.208035  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:56.208095  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:56.232594  546345 cri.go:89] found id: ""
	I1202 22:29:56.232617  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.232625  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:56.232632  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:56.232699  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:56.257221  546345 cri.go:89] found id: ""
	I1202 22:29:56.257247  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.257256  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:56.257265  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:56.257278  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:56.283035  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:56.283061  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:56.339962  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:56.339997  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:56.355699  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:56.355773  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:56.414625  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:56.408245    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:56.408723    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:56.409828    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:56.410193    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:56.411567    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:56.414693  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:56.414738  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:56.879279  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:29:56.938440  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:29:56.938561  546345 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
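"apply failed, will retry" means minikube keeps reapplying the manifest until its addon deadline expires. The shape of that retry, as a shell sketch (the 5-second interval is illustrative only, not minikube's actual backoff):

    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
          /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
          -f /etc/kubernetes/addons/storageclass.yaml; do
      sleep 5
    done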
	I1202 22:29:58.938802  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:58.951366  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:58.951487  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:58.978893  546345 cri.go:89] found id: ""
	I1202 22:29:58.978916  546345 logs.go:282] 0 containers: []
	W1202 22:29:58.978924  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:58.978931  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:58.978990  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:59.005270  546345 cri.go:89] found id: ""
	I1202 22:29:59.005299  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.005309  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:59.005316  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:59.005396  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:59.029424  546345 cri.go:89] found id: ""
	I1202 22:29:59.029453  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.029461  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:59.029468  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:59.029525  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:59.053363  546345 cri.go:89] found id: ""
	I1202 22:29:59.053398  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.053407  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:59.053414  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:59.053481  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:59.078974  546345 cri.go:89] found id: ""
	I1202 22:29:59.079051  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.079073  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:59.079088  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:59.079162  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:59.103336  546345 cri.go:89] found id: ""
	I1202 22:29:59.103358  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.103366  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:59.103383  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:59.103441  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:59.127855  546345 cri.go:89] found id: ""
	I1202 22:29:59.127929  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.127952  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:59.127972  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:59.128077  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:59.151167  546345 cri.go:89] found id: ""
	I1202 22:29:59.151196  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.151204  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:59.151213  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:59.151224  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:59.208516  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:59.208559  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:59.224755  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:59.224780  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:59.286748  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:59.279244    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.279739    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.281332    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.281754    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.283394    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:59.286772  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:59.286787  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:59.311855  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:59.311889  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:01.391459  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:30:01.475431  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:30:01.475652  546345 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 22:30:01.478828  546345 out.go:179] * Enabled addons: 
	I1202 22:30:01.482057  546345 addons.go:530] duration metric: took 1m44.065625472s for enable addons: enabled=[]
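The empty enabled=[] list confirms that after 1m44s of retries none of the requested addons (dashboard, default-storageclass, storage-provisioner) made it onto the cluster. From the host, the same state can be checked with the command below, where <profile> stands in for this run's profile name:

    minikube addons list -p <profile>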
	I1202 22:30:01.843006  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:01.854584  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:01.854684  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:01.885465  546345 cri.go:89] found id: ""
	I1202 22:30:01.885501  546345 logs.go:282] 0 containers: []
	W1202 22:30:01.885510  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:01.885517  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:01.885587  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:01.917316  546345 cri.go:89] found id: ""
	I1202 22:30:01.917348  546345 logs.go:282] 0 containers: []
	W1202 22:30:01.917359  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:01.917366  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:01.917463  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:01.943052  546345 cri.go:89] found id: ""
	I1202 22:30:01.943078  546345 logs.go:282] 0 containers: []
	W1202 22:30:01.943086  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:01.943093  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:01.943153  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:01.969294  546345 cri.go:89] found id: ""
	I1202 22:30:01.969321  546345 logs.go:282] 0 containers: []
	W1202 22:30:01.969330  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:01.969339  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:01.969402  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:01.996336  546345 cri.go:89] found id: ""
	I1202 22:30:01.996405  546345 logs.go:282] 0 containers: []
	W1202 22:30:01.996428  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:01.996449  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:01.996537  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:02.025075  546345 cri.go:89] found id: ""
	I1202 22:30:02.025158  546345 logs.go:282] 0 containers: []
	W1202 22:30:02.025183  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:02.025203  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:02.025300  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:02.078384  546345 cri.go:89] found id: ""
	I1202 22:30:02.078450  546345 logs.go:282] 0 containers: []
	W1202 22:30:02.078474  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:02.078493  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:02.078585  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:02.124922  546345 cri.go:89] found id: ""
	I1202 22:30:02.125001  546345 logs.go:282] 0 containers: []
	W1202 22:30:02.125021  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:02.125031  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:02.125044  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:02.197595  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:02.188806    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.189743    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.191423    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.192018    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.193637    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:02.197618  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:02.197634  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:02.223170  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:02.223203  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:02.255281  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:02.255348  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:02.310654  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:02.310690  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
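The sweep above is one iteration of minikube's apiserver wait loop: it polls for a kube-apiserver process with pgrep, asks the CRI for each expected control-plane container by name, finds none, and then collects kubelet, dmesg, describe-nodes, containerd, and container-status logs before retrying a few seconds later. As a rough illustration only, here is a minimal Go sketch of such a loop, reusing the exact pgrep probe shown on the Run lines; the 2-minute timeout and 3-second interval are assumptions for the sketch, not minikube's actual values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer re-runs the same probe the log shows,
// `pgrep -xnf kube-apiserver.*minikube.*`, until a matching process
// appears or the deadline passes. Timeout and interval are illustrative.
func waitForAPIServer(timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches.
		if err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2*time.Minute, 3*time.Second); err != nil {
		fmt.Println(err)
	}
}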
	I1202 22:30:04.828623  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:04.839157  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:04.839282  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:04.863863  546345 cri.go:89] found id: ""
	I1202 22:30:04.863887  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.863896  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:04.863903  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:04.863996  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:04.890006  546345 cri.go:89] found id: ""
	I1202 22:30:04.890031  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.890040  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:04.890047  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:04.890146  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:04.915998  546345 cri.go:89] found id: ""
	I1202 22:30:04.916021  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.916035  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:04.916042  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:04.916100  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:04.940395  546345 cri.go:89] found id: ""
	I1202 22:30:04.940420  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.940429  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:04.940435  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:04.940495  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:04.964621  546345 cri.go:89] found id: ""
	I1202 22:30:04.964650  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.964660  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:04.964667  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:04.964737  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:04.989632  546345 cri.go:89] found id: ""
	I1202 22:30:04.989685  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.989694  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:04.989702  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:04.989760  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:05.019501  546345 cri.go:89] found id: ""
	I1202 22:30:05.019528  546345 logs.go:282] 0 containers: []
	W1202 22:30:05.019537  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:05.019545  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:05.019610  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:05.049637  546345 cri.go:89] found id: ""
	I1202 22:30:05.049682  546345 logs.go:282] 0 containers: []
	W1202 22:30:05.049690  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:05.049700  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:05.049711  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:05.088244  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:05.088281  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:05.133381  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:05.133409  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:05.194841  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:05.194874  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:05.210533  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:05.210560  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:05.273348  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:05.265533    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.265959    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.267751    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.268062    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.270006    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
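Every describe-nodes attempt fails identically: kubectl gets "connection refused" on localhost:8443, meaning nothing is listening on the apiserver's secure port inside the node, which is consistent with the empty crictl listings above. The same condition can be confirmed without kubectl by a raw TCP dial; a small hypothetical checker (the address and timeout are assumptions, not taken from minikube):

package main

import (
	"fmt"
	"net"
	"time"
)

// checkAPIServerPort reports whether anything accepts TCP connections on
// the given address; a "connect: connection refused" error here matches
// the kubectl failures in the log.
func checkAPIServerPort(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := checkAPIServerPort("localhost:8443"); err != nil {
		fmt.Println("apiserver port closed:", err)
	} else {
		fmt.Println("apiserver port open")
	}
}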
	I1202 22:30:07.774501  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:07.784828  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:07.784927  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:07.814568  546345 cri.go:89] found id: ""
	I1202 22:30:07.814610  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.814619  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:07.814627  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:07.814711  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:07.839281  546345 cri.go:89] found id: ""
	I1202 22:30:07.839306  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.839325  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:07.839333  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:07.839410  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:07.863734  546345 cri.go:89] found id: ""
	I1202 22:30:07.863756  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.863764  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:07.863771  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:07.863830  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:07.887517  546345 cri.go:89] found id: ""
	I1202 22:30:07.887541  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.887549  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:07.887556  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:07.887615  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:07.912577  546345 cri.go:89] found id: ""
	I1202 22:30:07.912599  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.912608  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:07.912614  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:07.912684  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:07.937037  546345 cri.go:89] found id: ""
	I1202 22:30:07.937062  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.937071  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:07.937088  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:07.937153  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:07.961873  546345 cri.go:89] found id: ""
	I1202 22:30:07.961901  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.961910  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:07.961916  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:07.961974  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:07.985864  546345 cri.go:89] found id: ""
	I1202 22:30:07.985890  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.985906  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:07.985917  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:07.985928  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:08.011244  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:08.011284  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:08.055290  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:08.055321  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:08.134015  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:08.134069  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:08.154013  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:08.154041  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:08.223778  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:08.216502    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.217150    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.218711    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.219222    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.220667    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:10.723964  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:10.736098  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:10.736214  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:10.761205  546345 cri.go:89] found id: ""
	I1202 22:30:10.761227  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.761236  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:10.761243  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:10.761303  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:10.785829  546345 cri.go:89] found id: ""
	I1202 22:30:10.785856  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.785865  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:10.785872  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:10.785931  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:10.815724  546345 cri.go:89] found id: ""
	I1202 22:30:10.815748  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.815757  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:10.815767  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:10.815844  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:10.840563  546345 cri.go:89] found id: ""
	I1202 22:30:10.840586  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.840594  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:10.840601  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:10.840667  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:10.869275  546345 cri.go:89] found id: ""
	I1202 22:30:10.869349  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.869372  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:10.869391  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:10.869478  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:10.894450  546345 cri.go:89] found id: ""
	I1202 22:30:10.894477  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.894486  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:10.894493  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:10.894572  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:10.919134  546345 cri.go:89] found id: ""
	I1202 22:30:10.919161  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.919170  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:10.919177  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:10.919238  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:10.944009  546345 cri.go:89] found id: ""
	I1202 22:30:10.944035  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.944044  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:10.944053  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:10.944066  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:11.000144  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:11.000183  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:11.018501  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:11.018532  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:11.149770  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:11.141251    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.142054    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.143941    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.144500    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.146190    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:11.149837  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:11.149860  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:11.175018  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:11.175055  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
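Each `found id: ""` above is the result of the crictl query on the preceding Run line: `crictl ps -a --quiet --name=<component>` prints one container ID per line and prints nothing when no container, running or exited, matches, so containerd never created any control-plane containers during this window. A standalone sketch of the same query (hypothetical helper; assumes crictl on the PATH and passwordless sudo):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the per-component query in the log. An empty
// result corresponds to the `found id: ""` lines above.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listContainerIDs(c)
		fmt.Println(c, ids, err)
	}
}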
	I1202 22:30:13.702967  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:13.713482  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:13.713560  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:13.739844  546345 cri.go:89] found id: ""
	I1202 22:30:13.739867  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.739876  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:13.739886  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:13.739943  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:13.765162  546345 cri.go:89] found id: ""
	I1202 22:30:13.765184  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.765192  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:13.765199  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:13.765256  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:13.790968  546345 cri.go:89] found id: ""
	I1202 22:30:13.790991  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.790999  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:13.791005  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:13.791069  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:13.816755  546345 cri.go:89] found id: ""
	I1202 22:30:13.816791  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.816799  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:13.816806  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:13.816869  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:13.843444  546345 cri.go:89] found id: ""
	I1202 22:30:13.843469  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.843477  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:13.843484  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:13.843551  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:13.868489  546345 cri.go:89] found id: ""
	I1202 22:30:13.868514  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.868523  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:13.868530  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:13.868608  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:13.893527  546345 cri.go:89] found id: ""
	I1202 22:30:13.893552  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.893560  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:13.893567  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:13.893624  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:13.919358  546345 cri.go:89] found id: ""
	I1202 22:30:13.919382  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.919390  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:13.919400  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:13.919411  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:13.946818  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:13.946846  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:14.004198  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:14.004294  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:14.021120  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:14.021157  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:14.145347  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:14.136103    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.138065    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.138857    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.140566    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.141159    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:14.145369  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:14.145382  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:16.669687  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:16.680323  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:16.680426  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:16.705295  546345 cri.go:89] found id: ""
	I1202 22:30:16.705320  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.705329  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:16.705335  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:16.705394  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:16.729538  546345 cri.go:89] found id: ""
	I1202 22:30:16.729633  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.729648  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:16.729682  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:16.729766  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:16.754022  546345 cri.go:89] found id: ""
	I1202 22:30:16.754045  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.754053  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:16.754059  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:16.754119  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:16.780138  546345 cri.go:89] found id: ""
	I1202 22:30:16.780163  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.780171  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:16.780178  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:16.780237  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:16.805096  546345 cri.go:89] found id: ""
	I1202 22:30:16.805123  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.805134  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:16.805141  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:16.805201  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:16.830436  546345 cri.go:89] found id: ""
	I1202 22:30:16.830461  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.830470  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:16.830477  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:16.830537  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:16.859101  546345 cri.go:89] found id: ""
	I1202 22:30:16.859126  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.859135  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:16.859142  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:16.859201  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:16.884001  546345 cri.go:89] found id: ""
	I1202 22:30:16.884025  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.884033  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:16.884043  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:16.884054  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:16.919216  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:16.919242  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:16.974540  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:16.974574  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:16.990333  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:16.990361  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:17.096330  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:17.076545    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.087821    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.088549    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.090292    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.090828    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:17.096351  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:17.096363  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:19.641119  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:19.651302  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:19.651372  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:19.675892  546345 cri.go:89] found id: ""
	I1202 22:30:19.675920  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.675929  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:19.675935  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:19.675993  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:19.700442  546345 cri.go:89] found id: ""
	I1202 22:30:19.700472  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.700480  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:19.700487  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:19.700545  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:19.724905  546345 cri.go:89] found id: ""
	I1202 22:30:19.724933  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.724941  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:19.724948  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:19.725008  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:19.749042  546345 cri.go:89] found id: ""
	I1202 22:30:19.749064  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.749072  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:19.749079  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:19.749142  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:19.772319  546345 cri.go:89] found id: ""
	I1202 22:30:19.772346  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.772354  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:19.772361  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:19.772423  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:19.796590  546345 cri.go:89] found id: ""
	I1202 22:30:19.796661  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.796685  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:19.796706  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:19.796791  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:19.820897  546345 cri.go:89] found id: ""
	I1202 22:30:19.820971  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.820994  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:19.821013  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:19.821097  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:19.845057  546345 cri.go:89] found id: ""
	I1202 22:30:19.845127  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.845151  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:19.845173  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:19.845210  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:19.901157  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:19.901190  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:19.916681  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:19.916709  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:19.978835  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:19.970731    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.971143    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.973694    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.974147    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.975596    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:19.978855  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:19.978868  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:20.003532  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:20.003576  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:22.540194  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:22.550669  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:22.550752  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:22.575137  546345 cri.go:89] found id: ""
	I1202 22:30:22.575162  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.575179  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:22.575186  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:22.575246  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:22.600172  546345 cri.go:89] found id: ""
	I1202 22:30:22.600199  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.600208  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:22.600214  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:22.600280  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:22.627626  546345 cri.go:89] found id: ""
	I1202 22:30:22.627652  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.627661  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:22.627667  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:22.627727  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:22.652380  546345 cri.go:89] found id: ""
	I1202 22:30:22.652407  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.652416  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:22.652422  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:22.652483  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:22.679899  546345 cri.go:89] found id: ""
	I1202 22:30:22.679924  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.679933  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:22.679939  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:22.679999  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:22.704508  546345 cri.go:89] found id: ""
	I1202 22:30:22.704533  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.704542  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:22.704548  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:22.704623  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:22.729344  546345 cri.go:89] found id: ""
	I1202 22:30:22.729372  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.729380  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:22.729387  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:22.729451  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:22.753872  546345 cri.go:89] found id: ""
	I1202 22:30:22.753899  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.753908  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:22.753918  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:22.753929  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:22.810619  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:22.810654  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:22.826861  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:22.826887  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:22.891768  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:22.882105    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.883827    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.884903    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.886521    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.887080    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:22.891788  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:22.891801  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:22.915527  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:22.915563  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:25.443424  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:25.454070  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:25.454140  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:25.477867  546345 cri.go:89] found id: ""
	I1202 22:30:25.477888  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.477896  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:25.477902  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:25.477961  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:25.503405  546345 cri.go:89] found id: ""
	I1202 22:30:25.503440  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.503449  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:25.503456  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:25.503548  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:25.528678  546345 cri.go:89] found id: ""
	I1202 22:30:25.528703  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.528711  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:25.528718  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:25.528784  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:25.555479  546345 cri.go:89] found id: ""
	I1202 22:30:25.555505  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.555513  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:25.555520  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:25.555587  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:25.588375  546345 cri.go:89] found id: ""
	I1202 22:30:25.588398  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.588408  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:25.588415  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:25.588475  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:25.613403  546345 cri.go:89] found id: ""
	I1202 22:30:25.613488  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.613511  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:25.613532  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:25.613627  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:25.644249  546345 cri.go:89] found id: ""
	I1202 22:30:25.644273  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.644282  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:25.644289  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:25.644348  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:25.669360  546345 cri.go:89] found id: ""
	I1202 22:30:25.669385  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.669394  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:25.669432  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:25.669448  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:25.701067  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:25.701095  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:25.755359  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:25.755393  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:25.771118  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:25.771147  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:25.830809  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:25.823565    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.824061    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.825693    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.826148    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.827594    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:25.830832  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:25.830845  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
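Each polling iteration then walks the control-plane components one by one, asking the CRI runtime for any container, running or exited, whose name matches. A sketch of that sweep, with the component list taken from the log above; an empty answer is what produces the 'found id: ""' lines:

    #!/usr/bin/env bash
    # Query the CRI runtime for each expected component; -a includes exited
    # containers, --quiet prints only container IDs.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -n "$ids" ] && echo "$name: $ids" || echo "no container matching \"$name\""
    done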
	I1202 22:30:28.355998  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:28.366515  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:28.366588  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:28.391593  546345 cri.go:89] found id: ""
	I1202 22:30:28.391618  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.391627  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:28.391634  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:28.391694  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:28.420025  546345 cri.go:89] found id: ""
	I1202 22:30:28.420051  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.420060  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:28.420073  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:28.420137  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:28.444623  546345 cri.go:89] found id: ""
	I1202 22:30:28.444647  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.444655  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:28.444662  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:28.444726  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:28.469992  546345 cri.go:89] found id: ""
	I1202 22:30:28.470015  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.470024  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:28.470030  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:28.470089  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:28.495503  546345 cri.go:89] found id: ""
	I1202 22:30:28.495580  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.495602  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:28.495616  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:28.495687  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:28.520105  546345 cri.go:89] found id: ""
	I1202 22:30:28.520130  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.520139  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:28.520145  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:28.520207  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:28.547412  546345 cri.go:89] found id: ""
	I1202 22:30:28.547444  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.547454  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:28.547460  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:28.547522  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:28.572324  546345 cri.go:89] found id: ""
	I1202 22:30:28.572349  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.572358  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:28.572367  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:28.572379  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:28.587929  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:28.587952  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:28.651756  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:28.642887    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.643983    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.645771    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.646308    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.648089    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:28.651790  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:28.651803  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:28.676386  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:28.676421  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:28.708051  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:28.708079  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
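Every "describe nodes" attempt fails the same way: dial tcp [::1]:8443: connect: connection refused. That simply means nothing is listening on the apiserver port inside the node, consistent with the empty kube-apiserver container listings above. A quick manual check for the same condition (port taken from the log; run inside the node):

    # Is anything bound to the apiserver port, and does it answer?
    sudo ss -ltnp | grep 8443 || echo "no listener on 8443"
    curl -k --connect-timeout 2 https://localhost:8443/healthz || true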
	I1202 22:30:31.265370  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:31.275659  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:31.275728  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:31.335888  546345 cri.go:89] found id: ""
	I1202 22:30:31.335928  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.335956  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:31.335970  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:31.336049  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:31.386854  546345 cri.go:89] found id: ""
	I1202 22:30:31.386880  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.386888  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:31.386895  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:31.386979  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:31.410707  546345 cri.go:89] found id: ""
	I1202 22:30:31.410731  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.410739  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:31.410746  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:31.410804  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:31.439172  546345 cri.go:89] found id: ""
	I1202 22:30:31.439239  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.439263  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:31.439276  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:31.439355  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:31.467199  546345 cri.go:89] found id: ""
	I1202 22:30:31.467277  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.467293  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:31.467301  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:31.467390  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:31.495081  546345 cri.go:89] found id: ""
	I1202 22:30:31.495155  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.495178  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:31.495193  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:31.495270  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:31.518280  546345 cri.go:89] found id: ""
	I1202 22:30:31.518306  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.518315  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:31.518323  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:31.518400  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:31.543715  546345 cri.go:89] found id: ""
	I1202 22:30:31.543757  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.543793  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:31.543809  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:31.543821  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:31.601359  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:31.601392  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:31.617291  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:31.617323  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:31.682689  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:31.674005    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.674679    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.676468    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.677142    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.678841    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:31.682713  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:31.682727  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:31.706626  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:31.706661  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
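The log-gathering steps are plain journalctl and dmesg invocations; the same data can be pulled by hand when a start hangs like this (commands copied from the log, with --no-pager added for interactive use):

    sudo journalctl -u kubelet -n 400 --no-pager      # last 400 kubelet lines
    sudo journalctl -u containerd -n 400 --no-pager   # last 400 runtime lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400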
	I1202 22:30:34.235905  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:34.246438  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:34.246560  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:34.271279  546345 cri.go:89] found id: ""
	I1202 22:30:34.271350  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.271365  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:34.271374  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:34.271434  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:34.303460  546345 cri.go:89] found id: ""
	I1202 22:30:34.303498  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.303507  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:34.303513  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:34.303635  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:34.355759  546345 cri.go:89] found id: ""
	I1202 22:30:34.355786  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.355795  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:34.355801  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:34.355908  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:34.402466  546345 cri.go:89] found id: ""
	I1202 22:30:34.402553  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.402572  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:34.402580  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:34.402654  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:34.431909  546345 cri.go:89] found id: ""
	I1202 22:30:34.431932  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.431941  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:34.431947  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:34.432004  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:34.455451  546345 cri.go:89] found id: ""
	I1202 22:30:34.455476  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.455484  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:34.455491  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:34.455632  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:34.478771  546345 cri.go:89] found id: ""
	I1202 22:30:34.478797  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.478805  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:34.478812  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:34.478904  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:34.502377  546345 cri.go:89] found id: ""
	I1202 22:30:34.502452  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.502468  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:34.502479  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:34.502490  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:34.559881  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:34.559925  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:34.576755  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:34.576785  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:34.640203  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:34.633348    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.633906    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.635346    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.635740    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.637154    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:34.640223  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:34.640236  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:34.664331  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:34.664368  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
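Each iteration opens with a process-level check before any container queries: pgrep -f matches against the full command line, -x requires the pattern to match that line exactly, and -n returns only the newest match. A nonzero exit status here is what keeps the retry loop going:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver not running"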
	I1202 22:30:37.198596  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:37.208910  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:37.208981  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:37.233321  546345 cri.go:89] found id: ""
	I1202 22:30:37.233346  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.233354  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:37.233361  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:37.233419  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:37.259307  546345 cri.go:89] found id: ""
	I1202 22:30:37.259331  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.259340  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:37.259346  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:37.259404  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:37.282333  546345 cri.go:89] found id: ""
	I1202 22:30:37.282358  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.282367  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:37.282373  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:37.282430  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:37.351993  546345 cri.go:89] found id: ""
	I1202 22:30:37.352018  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.352027  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:37.352034  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:37.352124  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:37.398805  546345 cri.go:89] found id: ""
	I1202 22:30:37.398829  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.398840  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:37.398847  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:37.398912  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:37.422987  546345 cri.go:89] found id: ""
	I1202 22:30:37.423010  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.423019  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:37.423026  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:37.423100  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:37.447502  546345 cri.go:89] found id: ""
	I1202 22:30:37.447528  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.447537  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:37.447544  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:37.447630  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:37.471899  546345 cri.go:89] found id: ""
	I1202 22:30:37.471934  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.471943  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:37.471952  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:37.471963  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:37.528313  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:37.528350  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:37.544433  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:37.544464  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:37.611970  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:37.603634    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.604306    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.606167    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.606744    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.608686    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:37.611994  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:37.612007  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:37.636937  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:37.636971  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
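Note that the "describe nodes" probe does not use the host's kubectl; it runs the binary minikube staged on the node, pointed at the node-local kubeconfig. Reproducing it by hand (both paths copied from the log):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig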
	I1202 22:30:40.165587  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:40.177235  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:40.177323  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:40.205543  546345 cri.go:89] found id: ""
	I1202 22:30:40.205568  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.205576  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:40.205583  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:40.205644  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:40.232642  546345 cri.go:89] found id: ""
	I1202 22:30:40.232668  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.232677  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:40.232684  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:40.232746  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:40.259447  546345 cri.go:89] found id: ""
	I1202 22:30:40.259482  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.259496  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:40.259503  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:40.259591  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:40.297166  546345 cri.go:89] found id: ""
	I1202 22:30:40.297190  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.297198  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:40.297205  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:40.297268  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:40.337983  546345 cri.go:89] found id: ""
	I1202 22:30:40.338005  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.338014  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:40.338020  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:40.338079  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:40.380237  546345 cri.go:89] found id: ""
	I1202 22:30:40.380266  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.380274  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:40.380282  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:40.380343  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:40.412498  546345 cri.go:89] found id: ""
	I1202 22:30:40.412563  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.412572  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:40.412579  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:40.412637  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:40.441910  546345 cri.go:89] found id: ""
	I1202 22:30:40.441934  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.441943  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:40.441952  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:40.441969  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:40.496209  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:40.496245  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:40.512922  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:40.512953  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:40.580850  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:40.572953    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.573782    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.575424    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.575716    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.577152    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:40.580875  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:40.580887  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:40.605967  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:40.606001  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:43.139166  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:43.149443  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:43.149516  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:43.177064  546345 cri.go:89] found id: ""
	I1202 22:30:43.177091  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.177099  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:43.177106  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:43.177164  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:43.201811  546345 cri.go:89] found id: ""
	I1202 22:30:43.201837  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.201845  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:43.201852  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:43.201912  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:43.225492  546345 cri.go:89] found id: ""
	I1202 22:30:43.225520  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.225529  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:43.225536  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:43.225594  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:43.249036  546345 cri.go:89] found id: ""
	I1202 22:30:43.249064  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.249072  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:43.249079  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:43.249139  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:43.277251  546345 cri.go:89] found id: ""
	I1202 22:30:43.277276  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.277285  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:43.277297  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:43.277354  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:43.314360  546345 cri.go:89] found id: ""
	I1202 22:30:43.314396  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.314406  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:43.314413  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:43.314488  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:43.374629  546345 cri.go:89] found id: ""
	I1202 22:30:43.374657  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.374666  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:43.374672  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:43.374730  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:43.416766  546345 cri.go:89] found id: ""
	I1202 22:30:43.416794  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.416803  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:43.416812  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:43.416823  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:43.471606  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:43.471644  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:43.487334  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:43.487362  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:43.553915  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:43.545764    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:43.546550    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:43.548203    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:43.548575    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:43.550132    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:43.553939  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:43.553952  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:43.579222  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:43.579258  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:46.107248  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:46.118081  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:46.118150  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:46.142754  546345 cri.go:89] found id: ""
	I1202 22:30:46.142781  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.142789  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:46.142796  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:46.142861  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:46.169825  546345 cri.go:89] found id: ""
	I1202 22:30:46.169849  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.169858  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:46.169864  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:46.169929  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:46.196691  546345 cri.go:89] found id: ""
	I1202 22:30:46.196719  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.196728  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:46.196734  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:46.196796  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:46.221449  546345 cri.go:89] found id: ""
	I1202 22:30:46.221476  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.221485  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:46.221492  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:46.221552  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:46.246043  546345 cri.go:89] found id: ""
	I1202 22:30:46.246108  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.246131  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:46.246145  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:46.246227  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:46.271663  546345 cri.go:89] found id: ""
	I1202 22:30:46.271687  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.271695  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:46.271702  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:46.271760  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:46.315379  546345 cri.go:89] found id: ""
	I1202 22:30:46.315404  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.315413  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:46.315420  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:46.315477  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:46.359855  546345 cri.go:89] found id: ""
	I1202 22:30:46.359883  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.359893  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:46.359903  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:46.359915  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:46.377127  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:46.377158  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:46.445559  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:46.437310    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.438174    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.439869    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.440445    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.442197    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:46.437310    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.438174    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.439869    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.440445    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.442197    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:46.445583  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:46.445605  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:46.473713  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:46.473754  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:46.501189  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:46.501221  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
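The block above is one pass of minikube's log-collection loop: for each control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) it lists matching CRI containers, gets an empty result (found id: ""), and then falls back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status output. A minimal sketch for reproducing the same checks by hand inside the node, using only the commands visible in the Run: lines above (the profile name is a hypothetical placeholder, not taken from this report):

    # Open a shell on the node; <profile> is a placeholder.
    minikube ssh -p <profile>

    # One component check from the loop: an empty result corresponds to
    # the `found id: ""` lines in the log.
    sudo crictl ps -a --quiet --name=kube-apiserver

    # The fallback log sources the collector gathers when nothing is found:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400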
	I1202 22:30:49.058128  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:49.068126  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:49.068198  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:49.092263  546345 cri.go:89] found id: ""
	I1202 22:30:49.092288  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.092297  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:49.092303  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:49.092360  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:49.115983  546345 cri.go:89] found id: ""
	I1202 22:30:49.116008  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.116017  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:49.116024  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:49.116081  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:49.139874  546345 cri.go:89] found id: ""
	I1202 22:30:49.139899  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.139908  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:49.139915  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:49.139971  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:49.164359  546345 cri.go:89] found id: ""
	I1202 22:30:49.164388  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.164397  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:49.164404  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:49.164485  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:49.189339  546345 cri.go:89] found id: ""
	I1202 22:30:49.189365  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.189374  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:49.189383  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:49.189440  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:49.213800  546345 cri.go:89] found id: ""
	I1202 22:30:49.213826  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.213835  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:49.213842  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:49.213899  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:49.238436  546345 cri.go:89] found id: ""
	I1202 22:30:49.238463  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.238473  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:49.238480  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:49.238540  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:49.267385  546345 cri.go:89] found id: ""
	I1202 22:30:49.267459  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.267483  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:49.267500  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:49.267523  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:49.332624  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:49.332664  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:49.365875  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:49.365902  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:49.443796  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:49.436340    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.436862    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.438439    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.438890    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.440534    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:49.436340    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.436862    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.438439    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.438890    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.440534    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:49.443869  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:49.443888  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:49.467900  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:49.467933  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
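The "container status" step is a small shell fallback chain. Expanded for readability (same behavior as the one-liner in the log; the variable name is illustrative only):

    # Resolve crictl to its full path if installed; otherwise keep the
    # bare name so a failure still produces a meaningful error.
    CRICTL="$(which crictl || echo crictl)"

    # Prefer crictl; only if it fails entirely, fall back to the docker CLI.
    sudo "$CRICTL" ps -a || sudo docker ps -a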
	I1202 22:30:51.996457  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:52.009596  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:52.009694  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:52.037146  546345 cri.go:89] found id: ""
	I1202 22:30:52.037172  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.037190  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:52.037197  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:52.037257  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:52.063683  546345 cri.go:89] found id: ""
	I1202 22:30:52.063708  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.063717  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:52.063724  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:52.063786  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:52.089573  546345 cri.go:89] found id: ""
	I1202 22:30:52.089598  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.089606  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:52.089613  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:52.089704  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:52.114785  546345 cri.go:89] found id: ""
	I1202 22:30:52.114810  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.114819  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:52.114826  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:52.114884  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:52.137456  546345 cri.go:89] found id: ""
	I1202 22:30:52.137479  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.137489  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:52.137495  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:52.137552  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:52.161392  546345 cri.go:89] found id: ""
	I1202 22:30:52.161418  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.161426  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:52.161433  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:52.161544  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:52.186619  546345 cri.go:89] found id: ""
	I1202 22:30:52.186640  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.186648  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:52.186658  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:52.186717  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:52.211047  546345 cri.go:89] found id: ""
	I1202 22:30:52.211069  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.211077  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:52.211086  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:52.211097  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:52.240049  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:52.240079  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:52.297727  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:52.297804  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:52.326988  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:52.327061  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:52.421545  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:52.413695    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.414266    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.415896    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.416344    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.418034    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:52.413695    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.414266    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.415896    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.416344    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.418034    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:52.421566  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:52.421578  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
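Every "describe nodes" attempt fails the same way: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at localhost:8443, and with no kube-apiserver process running, nothing is listening there, so each of kubectl's discovery retries is refused. A quick way to confirm from inside the node (assuming ss is available in the node image, which this log does not show):

    # Show whether anything listens on the apiserver port; no output means
    # the connection-refused errors above are expected.
    sudo ss -ltnp | grep 8443 || echo "no listener on 8443"

    # The exact describe call the collector runs, copied from the log:
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig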
	I1202 22:30:54.945402  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:54.955618  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:54.955688  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:54.981106  546345 cri.go:89] found id: ""
	I1202 22:30:54.981132  546345 logs.go:282] 0 containers: []
	W1202 22:30:54.981140  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:54.981147  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:54.981210  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:55.017766  546345 cri.go:89] found id: ""
	I1202 22:30:55.017789  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.017798  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:55.017805  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:55.017886  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:55.051218  546345 cri.go:89] found id: ""
	I1202 22:30:55.051293  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.051320  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:55.051342  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:55.051449  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:55.092842  546345 cri.go:89] found id: ""
	I1202 22:30:55.092869  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.092879  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:55.092886  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:55.092955  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:55.131432  546345 cri.go:89] found id: ""
	I1202 22:30:55.131517  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.131546  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:55.131570  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:55.131702  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:55.166611  546345 cri.go:89] found id: ""
	I1202 22:30:55.166639  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.166653  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:55.166661  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:55.166737  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:55.197157  546345 cri.go:89] found id: ""
	I1202 22:30:55.197183  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.197199  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:55.197206  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:55.197277  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:55.229014  546345 cri.go:89] found id: ""
	I1202 22:30:55.229045  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.229053  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:55.229062  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:55.229074  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:55.284839  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:55.284877  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:55.312855  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:55.312884  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:55.414558  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:55.406788    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.407346    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.408947    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.409345    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.410922    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:55.406788    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.407346    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.408947    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.409345    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.410922    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:55.414580  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:55.414595  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:55.439435  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:55.439472  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:57.966587  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:57.977332  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:57.977425  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:58.009109  546345 cri.go:89] found id: ""
	I1202 22:30:58.009146  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.009155  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:58.009162  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:58.009277  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:58.034957  546345 cri.go:89] found id: ""
	I1202 22:30:58.034980  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.034989  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:58.034996  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:58.035075  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:58.059651  546345 cri.go:89] found id: ""
	I1202 22:30:58.059677  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.059687  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:58.059694  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:58.059754  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:58.092476  546345 cri.go:89] found id: ""
	I1202 22:30:58.092510  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.092520  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:58.092527  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:58.092601  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:58.116505  546345 cri.go:89] found id: ""
	I1202 22:30:58.116531  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.116539  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:58.116545  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:58.116617  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:58.141152  546345 cri.go:89] found id: ""
	I1202 22:30:58.141180  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.141189  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:58.141196  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:58.141252  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:58.167272  546345 cri.go:89] found id: ""
	I1202 22:30:58.167294  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.167302  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:58.167308  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:58.167365  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:58.193236  546345 cri.go:89] found id: ""
	I1202 22:30:58.193311  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.193334  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:58.193351  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:58.193374  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:58.248292  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:58.248365  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:58.263580  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:58.263610  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:58.374750  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:58.366839    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.367545    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.369133    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.369607    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.371475    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:58.366839    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.367545    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.369133    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.369607    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.371475    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:58.374772  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:58.374784  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:58.401522  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:58.401558  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:00.931781  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:00.941965  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:00.942042  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:00.966926  546345 cri.go:89] found id: ""
	I1202 22:31:00.966950  546345 logs.go:282] 0 containers: []
	W1202 22:31:00.966958  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:00.966965  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:00.967026  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:00.991438  546345 cri.go:89] found id: ""
	I1202 22:31:00.991463  546345 logs.go:282] 0 containers: []
	W1202 22:31:00.991472  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:00.991479  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:00.991538  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:01.019713  546345 cri.go:89] found id: ""
	I1202 22:31:01.019737  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.019745  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:01.019752  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:01.019809  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:01.044143  546345 cri.go:89] found id: ""
	I1202 22:31:01.044166  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.044174  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:01.044181  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:01.044240  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:01.069071  546345 cri.go:89] found id: ""
	I1202 22:31:01.069094  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.069102  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:01.069109  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:01.069170  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:01.097613  546345 cri.go:89] found id: ""
	I1202 22:31:01.097639  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.097648  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:01.097688  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:01.097754  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:01.124227  546345 cri.go:89] found id: ""
	I1202 22:31:01.124251  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.124260  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:01.124267  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:01.124329  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:01.150457  546345 cri.go:89] found id: ""
	I1202 22:31:01.150483  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.150491  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:01.150501  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:01.150512  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:01.175721  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:01.175753  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:01.204876  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:01.204907  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:01.261532  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:01.261567  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:01.277504  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:01.277531  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:01.369721  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:01.355410    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.360001    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.360744    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.364140    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.364693    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:01.355410    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.360001    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.360744    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.364140    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.364693    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
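The timestamps show the apiserver health probe firing roughly every three seconds (22:30:46, :49, :52, :55, :58, 22:31:01, ...). Each iteration keys on a single pgrep call; a sketch of that check in isolation:

    # -f matches against the full command line, -x requires the whole
    # line to match the pattern, -n picks the newest match; a non-zero
    # exit means no kube-apiserver process exists yet, which triggers
    # another log sweep like the cycles above.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      && echo "apiserver process found" \
      || echo "apiserver not running"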
	I1202 22:31:03.870061  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:03.880451  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:03.880522  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:03.903663  546345 cri.go:89] found id: ""
	I1202 22:31:03.903688  546345 logs.go:282] 0 containers: []
	W1202 22:31:03.903698  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:03.903704  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:03.903767  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:03.927883  546345 cri.go:89] found id: ""
	I1202 22:31:03.927904  546345 logs.go:282] 0 containers: []
	W1202 22:31:03.927913  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:03.927920  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:03.927982  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:03.952301  546345 cri.go:89] found id: ""
	I1202 22:31:03.952324  546345 logs.go:282] 0 containers: []
	W1202 22:31:03.952332  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:03.952339  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:03.952397  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:03.977367  546345 cri.go:89] found id: ""
	I1202 22:31:03.977390  546345 logs.go:282] 0 containers: []
	W1202 22:31:03.977399  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:03.977406  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:03.977465  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:04.003308  546345 cri.go:89] found id: ""
	I1202 22:31:04.003336  546345 logs.go:282] 0 containers: []
	W1202 22:31:04.003347  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:04.003361  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:04.003438  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:04.030694  546345 cri.go:89] found id: ""
	I1202 22:31:04.030718  546345 logs.go:282] 0 containers: []
	W1202 22:31:04.030731  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:04.030738  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:04.030812  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:04.056404  546345 cri.go:89] found id: ""
	I1202 22:31:04.056430  546345 logs.go:282] 0 containers: []
	W1202 22:31:04.056439  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:04.056446  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:04.056506  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:04.081740  546345 cri.go:89] found id: ""
	I1202 22:31:04.081762  546345 logs.go:282] 0 containers: []
	W1202 22:31:04.081770  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:04.081779  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:04.081792  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:04.109259  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:04.109285  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:04.165104  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:04.165137  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:04.181694  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:04.181725  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:04.241465  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:04.234394    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.234783    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.236525    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.236860    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.238270    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:04.234394    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.234783    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.236525    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.236860    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.238270    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:04.241493  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:04.241506  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:06.766561  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:06.777372  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:06.777445  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:06.807209  546345 cri.go:89] found id: ""
	I1202 22:31:06.807235  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.807244  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:06.807251  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:06.807356  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:06.833401  546345 cri.go:89] found id: ""
	I1202 22:31:06.833424  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.833433  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:06.833439  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:06.833497  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:06.858407  546345 cri.go:89] found id: ""
	I1202 22:31:06.858434  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.858442  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:06.858449  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:06.858509  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:06.884341  546345 cri.go:89] found id: ""
	I1202 22:31:06.884367  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.884375  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:06.884382  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:06.884445  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:06.911764  546345 cri.go:89] found id: ""
	I1202 22:31:06.911787  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.911796  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:06.911802  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:06.911861  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:06.940179  546345 cri.go:89] found id: ""
	I1202 22:31:06.940204  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.940217  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:06.940225  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:06.940289  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:06.965277  546345 cri.go:89] found id: ""
	I1202 22:31:06.965304  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.965313  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:06.965320  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:06.965390  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:06.991270  546345 cri.go:89] found id: ""
	I1202 22:31:06.991294  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.991303  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:06.991313  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:06.991326  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:07.060741  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:07.051593    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.052288    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.053853    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.054275    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.057516    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:07.051593    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.052288    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.053853    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.054275    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.057516    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:07.060762  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:07.060778  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:07.085921  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:07.085970  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:07.113268  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:07.113298  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:07.169055  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:07.169092  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:09.686487  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:09.697143  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:09.697217  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:09.722726  546345 cri.go:89] found id: ""
	I1202 22:31:09.722749  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.722760  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:09.722767  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:09.722826  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:09.748226  546345 cri.go:89] found id: ""
	I1202 22:31:09.748251  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.748260  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:09.748267  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:09.748327  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:09.774010  546345 cri.go:89] found id: ""
	I1202 22:31:09.774035  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.774043  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:09.774050  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:09.774109  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:09.800227  546345 cri.go:89] found id: ""
	I1202 22:31:09.800250  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.800259  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:09.800266  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:09.800328  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:09.828744  546345 cri.go:89] found id: ""
	I1202 22:31:09.828768  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.828777  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:09.828784  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:09.828843  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:09.853554  546345 cri.go:89] found id: ""
	I1202 22:31:09.853577  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.853586  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:09.853593  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:09.853672  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:09.879248  546345 cri.go:89] found id: ""
	I1202 22:31:09.879271  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.879279  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:09.879285  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:09.879350  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:09.908338  546345 cri.go:89] found id: ""
	I1202 22:31:09.908364  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.908373  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:09.908383  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:09.908394  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:09.936944  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:09.936974  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:09.993598  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:09.993644  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:10.010732  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:10.010766  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:10.084652  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:10.073833    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.074265    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.077620    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.078616    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.080339    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:10.073833    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.074265    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.077620    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.078616    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.080339    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:10.084677  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:10.084692  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
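	The lines above are one iteration of minikube's wait loop: roughly every 2-3 seconds it re-probes for a kube-apiserver process and, on failure, re-gathers the same logs. A minimal Go sketch of that wait-with-deadline pattern, reusing the pgrep probe the log shows (the 2-second interval and 4-minute budget are illustrative assumptions, not minikube's real settings):

    // Sketch of the poll-until-apiserver pattern visible in the log above.
    // Not minikube's actual code; interval and deadline are assumed values.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func apiserverRunning() bool {
        // Same probe the log records: pgrep -xnf kube-apiserver.*minikube.*
        err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
        return err == nil // pgrep exits 0 only when a matching process exists
    }

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // hypothetical budget
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver is up")
                return
            }
            time.Sleep(2 * time.Second) // matches the ~2-3s cadence in the log
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }
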
	I1202 22:31:12.613817  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:12.624680  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:12.624765  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:12.651202  546345 cri.go:89] found id: ""
	I1202 22:31:12.651227  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.651236  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:12.651243  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:12.651301  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:12.676106  546345 cri.go:89] found id: ""
	I1202 22:31:12.676130  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.676138  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:12.676145  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:12.676202  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:12.700680  546345 cri.go:89] found id: ""
	I1202 22:31:12.700706  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.700716  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:12.700723  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:12.700787  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:12.726023  546345 cri.go:89] found id: ""
	I1202 22:31:12.726049  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.726059  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:12.726066  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:12.726126  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:12.750927  546345 cri.go:89] found id: ""
	I1202 22:31:12.750951  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.750959  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:12.750966  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:12.751026  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:12.777535  546345 cri.go:89] found id: ""
	I1202 22:31:12.777562  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.777570  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:12.777577  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:12.777634  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:12.801546  546345 cri.go:89] found id: ""
	I1202 22:31:12.801572  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.801581  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:12.801588  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:12.801646  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:12.829909  546345 cri.go:89] found id: ""
	I1202 22:31:12.829932  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.829941  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:12.829950  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:12.829961  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:12.859869  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:12.859896  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:12.914732  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:12.914767  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:12.930844  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:12.930875  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:12.995842  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:12.988692    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.989211    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.990739    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.991189    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.992650    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:12.988692    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.989211    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.990739    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.991189    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.992650    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:12.995865  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:12.995879  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
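	Each cycle also sweeps the CRI for every control-plane component by name; the empty `found id: ""` results above mean no such containers were ever created. A rough sketch of that sweep, shelling out with the same crictl flags the log records and assuming crictl is on PATH:

    // Hedged sketch of the per-component container sweep, not minikube's code.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, c := range components {
            // Same invocation as the log: crictl ps -a --quiet --name=<component>
            out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+c).Output()
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", c)
            } else {
                fmt.Printf("%s: %v\n", c, ids)
            }
        }
    }

	When every name in the list comes back empty, as here, the failure is upstream of the containers themselves: kubelet or containerd never started the static pods.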
	I1202 22:31:15.522875  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:15.533513  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:15.533591  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:15.569400  546345 cri.go:89] found id: ""
	I1202 22:31:15.569424  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.569433  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:15.569439  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:15.569496  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:15.628130  546345 cri.go:89] found id: ""
	I1202 22:31:15.628152  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.628161  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:15.628167  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:15.628228  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:15.653054  546345 cri.go:89] found id: ""
	I1202 22:31:15.653076  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.653085  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:15.653092  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:15.653149  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:15.678257  546345 cri.go:89] found id: ""
	I1202 22:31:15.678281  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.678290  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:15.678296  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:15.678353  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:15.702830  546345 cri.go:89] found id: ""
	I1202 22:31:15.702856  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.702864  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:15.702871  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:15.702936  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:15.728236  546345 cri.go:89] found id: ""
	I1202 22:31:15.728261  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.728270  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:15.728276  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:15.728336  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:15.753646  546345 cri.go:89] found id: ""
	I1202 22:31:15.753694  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.753703  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:15.753710  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:15.753772  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:15.778069  546345 cri.go:89] found id: ""
	I1202 22:31:15.778092  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.778101  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:15.778110  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:15.778121  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:15.834182  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:15.834217  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:15.850533  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:15.850572  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:15.911589  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:15.904443    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.904979    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.906513    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.906995    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.908448    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:15.904443    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.904979    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.906513    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.906995    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.908448    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:15.911609  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:15.911621  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:15.936945  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:15.936977  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
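	Every kubectl attempt fails with `dial tcp [::1]:8443: connect: connection refused`, meaning the node answers but nothing is listening on the apiserver port; a timeout would instead point at firewalling or routing. A quick hypothetical check that reproduces the distinction:

    // Illustrative port probe, assuming it runs on the node itself.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // "connection refused" = host reachable, no listener on 8443;
        // a dial timeout would suggest a network-path problem instead.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port closed:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on :8443")
    }
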
	I1202 22:31:18.470112  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:18.480648  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:18.480727  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:18.508083  546345 cri.go:89] found id: ""
	I1202 22:31:18.508109  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.508117  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:18.508124  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:18.508252  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:18.533123  546345 cri.go:89] found id: ""
	I1202 22:31:18.533149  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.533164  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:18.533172  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:18.533245  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:18.586767  546345 cri.go:89] found id: ""
	I1202 22:31:18.586791  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.586800  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:18.586806  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:18.586866  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:18.626205  546345 cri.go:89] found id: ""
	I1202 22:31:18.626227  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.626236  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:18.626242  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:18.626299  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:18.653977  546345 cri.go:89] found id: ""
	I1202 22:31:18.653998  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.654007  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:18.654013  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:18.654074  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:18.679194  546345 cri.go:89] found id: ""
	I1202 22:31:18.679227  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.679237  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:18.679244  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:18.679305  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:18.704215  546345 cri.go:89] found id: ""
	I1202 22:31:18.704280  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.704305  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:18.704326  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:18.704411  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:18.729467  546345 cri.go:89] found id: ""
	I1202 22:31:18.729536  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.729560  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:18.729583  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:18.729624  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:18.745333  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:18.745406  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:18.810842  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:18.802788    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.803411    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.805114    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.805719    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.807226    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:18.802788    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.803411    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.805114    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.805719    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.807226    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:18.810886  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:18.810899  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:18.836014  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:18.836050  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:18.864189  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:18.864230  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:21.420147  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:21.430404  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:21.430516  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:21.454558  546345 cri.go:89] found id: ""
	I1202 22:31:21.454583  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.454592  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:21.454599  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:21.454658  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:21.478328  546345 cri.go:89] found id: ""
	I1202 22:31:21.478360  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.478369  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:21.478377  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:21.478445  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:21.502704  546345 cri.go:89] found id: ""
	I1202 22:31:21.502729  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.502737  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:21.502744  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:21.502805  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:21.528175  546345 cri.go:89] found id: ""
	I1202 22:31:21.528201  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.528209  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:21.528216  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:21.528278  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:21.616594  546345 cri.go:89] found id: ""
	I1202 22:31:21.616622  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.616632  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:21.616638  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:21.616697  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:21.645131  546345 cri.go:89] found id: ""
	I1202 22:31:21.645160  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.645168  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:21.645178  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:21.645238  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:21.671523  546345 cri.go:89] found id: ""
	I1202 22:31:21.671545  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.671553  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:21.671564  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:21.671624  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:21.695173  546345 cri.go:89] found id: ""
	I1202 22:31:21.695195  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.695203  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:21.695212  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:21.695222  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:21.719757  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:21.719792  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:21.749635  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:21.749681  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:21.808026  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:21.808062  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:21.823780  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:21.823809  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:21.884457  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:21.877494    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.878140    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.879593    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.879994    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.881385    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:21.877494    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.878140    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.879593    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.879994    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.881385    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:24.384744  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:24.394799  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:24.394871  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:24.420708  546345 cri.go:89] found id: ""
	I1202 22:31:24.420731  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.420740  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:24.420747  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:24.420804  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:24.444913  546345 cri.go:89] found id: ""
	I1202 22:31:24.444938  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.444947  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:24.444953  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:24.445011  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:24.468474  546345 cri.go:89] found id: ""
	I1202 22:31:24.468562  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.468586  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:24.468619  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:24.468712  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:24.492364  546345 cri.go:89] found id: ""
	I1202 22:31:24.492435  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.492459  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:24.492479  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:24.492570  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:24.517358  546345 cri.go:89] found id: ""
	I1202 22:31:24.517434  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.517473  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:24.517498  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:24.517589  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:24.556717  546345 cri.go:89] found id: ""
	I1202 22:31:24.556800  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.556829  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:24.556870  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:24.556990  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:24.641499  546345 cri.go:89] found id: ""
	I1202 22:31:24.641533  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.641542  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:24.641549  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:24.641704  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:24.665998  546345 cri.go:89] found id: ""
	I1202 22:31:24.666024  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.666032  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:24.666041  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:24.666053  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:24.720801  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:24.720835  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:24.736228  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:24.736255  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:24.802911  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:24.795538    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.796038    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.797728    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.798172    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.799727    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:24.795538    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.796038    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.797728    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.798172    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.799727    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:24.802934  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:24.802948  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:24.826675  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:24.826710  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:27.352424  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:27.363728  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:27.363800  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:27.388330  546345 cri.go:89] found id: ""
	I1202 22:31:27.388356  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.388365  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:27.388372  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:27.388430  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:27.412561  546345 cri.go:89] found id: ""
	I1202 22:31:27.412589  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.412598  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:27.412605  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:27.412664  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:27.436953  546345 cri.go:89] found id: ""
	I1202 22:31:27.436982  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.436991  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:27.436997  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:27.437057  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:27.461746  546345 cri.go:89] found id: ""
	I1202 22:31:27.461775  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.461783  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:27.461790  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:27.461847  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:27.489561  546345 cri.go:89] found id: ""
	I1202 22:31:27.489598  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.489607  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:27.489614  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:27.489708  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:27.517814  546345 cri.go:89] found id: ""
	I1202 22:31:27.517835  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.517844  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:27.517851  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:27.517909  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:27.545611  546345 cri.go:89] found id: ""
	I1202 22:31:27.545711  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.545734  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:27.545754  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:27.545839  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:27.603444  546345 cri.go:89] found id: ""
	I1202 22:31:27.603466  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.603474  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:27.603484  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:27.603497  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:27.674112  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:27.674149  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:27.690096  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:27.690128  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:27.752579  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:27.743812    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.744655    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.746641    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.747382    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.749154    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:27.743812    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.744655    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.746641    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.747382    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.749154    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:27.752604  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:27.752617  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:27.777612  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:27.777647  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:30.305694  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:30.316225  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:30.316348  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:30.339916  546345 cri.go:89] found id: ""
	I1202 22:31:30.339950  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.339959  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:30.339974  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:30.340052  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:30.369549  546345 cri.go:89] found id: ""
	I1202 22:31:30.369575  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.369584  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:30.369590  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:30.369677  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:30.394634  546345 cri.go:89] found id: ""
	I1202 22:31:30.394711  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.394734  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:30.394749  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:30.394830  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:30.419244  546345 cri.go:89] found id: ""
	I1202 22:31:30.419271  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.419279  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:30.419286  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:30.419344  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:30.447382  546345 cri.go:89] found id: ""
	I1202 22:31:30.447414  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.447423  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:30.447430  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:30.447530  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:30.471131  546345 cri.go:89] found id: ""
	I1202 22:31:30.471155  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.471163  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:30.471170  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:30.471236  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:30.496091  546345 cri.go:89] found id: ""
	I1202 22:31:30.496116  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.496125  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:30.496132  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:30.496209  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:30.520739  546345 cri.go:89] found id: ""
	I1202 22:31:30.520767  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.520775  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:30.520785  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:30.520796  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:30.549966  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:30.550055  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:30.602152  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:30.602176  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:30.668135  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:30.668172  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:30.683585  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:30.683653  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:30.747838  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:30.740098    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.740709    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.742156    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.742602    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.744015    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:30.740098    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.740709    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.742156    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.742602    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.744015    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
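
[editor's note] The block above is one iteration of minikube's apiserver health-wait loop: roughly every three seconds it checks for a kube-apiserver process, then enumerates CRI containers for each expected control-plane component, and finally gathers diagnostics (containerd, container status, kubelet, dmesg, describe nodes). The following is a minimal sketch of that loop, not minikube's actual source; the component list and commands are taken from the log, everything else is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// Components the log probes via `sudo crictl ps -a --quiet --name=<name>`.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

func main() {
	for {
		// Equivalent of: sudo pgrep -xnf kube-apiserver.*minikube.*
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		for _, name := range components {
			// Equivalent of: sudo crictl ps -a --quiet --name=<name>
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if strings.TrimSpace(string(out)) == "" {
				fmt.Printf("no container found matching %q\n", name)
			}
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence of the log timestamps
	}
}

In this run every probe returns an empty ID list ("found id: \"\""), so the loop never exits and the same diagnostic block repeats until the test times out.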
	I1202 22:31:33.249502  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:33.259480  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:33.259551  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:33.282766  546345 cri.go:89] found id: ""
	I1202 22:31:33.282791  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.282799  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:33.282806  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:33.282866  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:33.308495  546345 cri.go:89] found id: ""
	I1202 22:31:33.308518  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.308533  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:33.308540  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:33.308597  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:33.331979  546345 cri.go:89] found id: ""
	I1202 22:31:33.332013  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.332023  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:33.332030  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:33.332100  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:33.356278  546345 cri.go:89] found id: ""
	I1202 22:31:33.356304  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.356313  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:33.356319  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:33.356378  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:33.384857  546345 cri.go:89] found id: ""
	I1202 22:31:33.384885  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.384893  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:33.384900  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:33.384959  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:33.409699  546345 cri.go:89] found id: ""
	I1202 22:31:33.409727  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.409735  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:33.409742  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:33.409818  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:33.433952  546345 cri.go:89] found id: ""
	I1202 22:31:33.433976  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.433984  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:33.433991  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:33.434048  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:33.457225  546345 cri.go:89] found id: ""
	I1202 22:31:33.457250  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.457265  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:33.457274  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:33.457286  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:33.481072  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:33.481106  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:33.513367  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:33.513402  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:33.575454  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:33.575500  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:33.611865  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:33.611895  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:33.687837  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:33.680605    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:33.681164    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:33.682734    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:33.683164    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:33.684656    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:33.680605    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:33.681164    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:33.682734    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:33.683164    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:33.684656    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:36.188106  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:36.198524  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:36.198595  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:36.227262  546345 cri.go:89] found id: ""
	I1202 22:31:36.227286  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.227294  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:36.227301  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:36.227364  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:36.251229  546345 cri.go:89] found id: ""
	I1202 22:31:36.251254  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.251262  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:36.251269  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:36.251328  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:36.280094  546345 cri.go:89] found id: ""
	I1202 22:31:36.280118  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.280128  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:36.280135  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:36.280192  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:36.303557  546345 cri.go:89] found id: ""
	I1202 22:31:36.303589  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.303598  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:36.303606  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:36.303680  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:36.328036  546345 cri.go:89] found id: ""
	I1202 22:31:36.328099  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.328110  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:36.328117  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:36.328210  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:36.352844  546345 cri.go:89] found id: ""
	I1202 22:31:36.352919  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.352942  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:36.352963  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:36.353076  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:36.377059  546345 cri.go:89] found id: ""
	I1202 22:31:36.377123  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.377148  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:36.377169  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:36.377299  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:36.406912  546345 cri.go:89] found id: ""
	I1202 22:31:36.406939  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.406947  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:36.406957  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:36.406969  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:36.462620  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:36.462655  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:36.478602  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:36.478633  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:36.553409  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:36.534656    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:36.535346    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:36.536921    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:36.537223    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:36.539840    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:36.534656    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:36.535346    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:36.536921    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:36.537223    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:36.539840    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:36.553440  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:36.553453  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:36.605527  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:36.605567  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
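
[editor's note] The "container status" step above shells out with a fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a — try crictl first, and fall back to docker if crictl is missing or fails. A small Go sketch of the same fallback pattern (illustrative only; function name containerStatus is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the logged fallback: prefer crictl,
// fall back to `docker ps -a` if crictl is absent or errors out.
func containerStatus() (string, error) {
	if path, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput(); err == nil {
			return string(out), nil
		}
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
		return
	}
	fmt.Print(out)
}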
	I1202 22:31:39.147765  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:39.158330  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:39.158399  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:39.184185  546345 cri.go:89] found id: ""
	I1202 22:31:39.184211  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.184220  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:39.184227  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:39.184286  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:39.211366  546345 cri.go:89] found id: ""
	I1202 22:31:39.211390  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.211399  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:39.211405  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:39.211465  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:39.239810  546345 cri.go:89] found id: ""
	I1202 22:31:39.239836  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.239846  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:39.239853  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:39.239914  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:39.264259  546345 cri.go:89] found id: ""
	I1202 22:31:39.264285  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.264294  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:39.264300  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:39.264357  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:39.288356  546345 cri.go:89] found id: ""
	I1202 22:31:39.288384  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.288394  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:39.288400  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:39.288459  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:39.312721  546345 cri.go:89] found id: ""
	I1202 22:31:39.312745  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.312754  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:39.312760  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:39.312817  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:39.337724  546345 cri.go:89] found id: ""
	I1202 22:31:39.337748  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.337756  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:39.337762  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:39.337821  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:39.362280  546345 cri.go:89] found id: ""
	I1202 22:31:39.362303  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.362311  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:39.362320  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:39.362332  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:39.389401  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:39.389425  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:39.449427  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:39.449471  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:39.464867  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:39.464897  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:39.527654  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:39.520193    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:39.521056    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:39.522506    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:39.522885    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:39.524329    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:39.520193    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:39.521056    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:39.522506    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:39.522885    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:39.524329    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:39.527675  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:39.527691  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:42.058126  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:42.070220  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:42.070305  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:42.113161  546345 cri.go:89] found id: ""
	I1202 22:31:42.113187  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.113197  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:42.113205  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:42.113279  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:42.151146  546345 cri.go:89] found id: ""
	I1202 22:31:42.151178  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.151188  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:42.151195  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:42.151267  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:42.187923  546345 cri.go:89] found id: ""
	I1202 22:31:42.187951  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.187960  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:42.187968  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:42.188040  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:42.222980  546345 cri.go:89] found id: ""
	I1202 22:31:42.223003  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.223012  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:42.223020  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:42.223088  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:42.271018  546345 cri.go:89] found id: ""
	I1202 22:31:42.271046  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.271056  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:42.271064  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:42.271136  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:42.302817  546345 cri.go:89] found id: ""
	I1202 22:31:42.302893  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.302913  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:42.302929  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:42.303020  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:42.333498  546345 cri.go:89] found id: ""
	I1202 22:31:42.333526  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.333535  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:42.333543  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:42.333630  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:42.363457  546345 cri.go:89] found id: ""
	I1202 22:31:42.363485  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.363495  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:42.363505  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:42.363518  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:42.421844  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:42.421883  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:42.439113  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:42.439145  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:42.506768  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:42.497962    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:42.498854    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:42.500599    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:42.501068    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:42.502782    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:42.497962    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:42.498854    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:42.500599    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:42.501068    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:42.502782    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:42.506791  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:42.506803  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:42.531455  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:42.531491  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:45.076035  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:45.089323  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:45.089414  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:45.121410  546345 cri.go:89] found id: ""
	I1202 22:31:45.121436  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.121445  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:45.121454  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:45.121523  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:45.158421  546345 cri.go:89] found id: ""
	I1202 22:31:45.158452  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.158461  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:45.158840  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:45.158933  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:45.226744  546345 cri.go:89] found id: ""
	I1202 22:31:45.226769  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.226778  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:45.226785  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:45.226855  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:45.277464  546345 cri.go:89] found id: ""
	I1202 22:31:45.277540  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.277560  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:45.277573  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:45.277920  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:45.321564  546345 cri.go:89] found id: ""
	I1202 22:31:45.321591  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.321600  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:45.321607  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:45.321695  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:45.349203  546345 cri.go:89] found id: ""
	I1202 22:31:45.349228  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.349236  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:45.349243  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:45.349302  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:45.377967  546345 cri.go:89] found id: ""
	I1202 22:31:45.377993  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.378001  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:45.378009  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:45.378068  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:45.404655  546345 cri.go:89] found id: ""
	I1202 22:31:45.404680  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.404689  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:45.404697  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:45.404709  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:45.459390  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:45.459424  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:45.474938  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:45.474964  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:45.551857  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:45.534337    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:45.534849    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:45.536311    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:45.536755    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:45.538167    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:45.534337    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:45.534849    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:45.536311    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:45.536755    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:45.538167    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:45.551880  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:45.551893  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:45.599545  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:45.599577  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:48.142376  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:48.152835  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:48.152910  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:48.176887  546345 cri.go:89] found id: ""
	I1202 22:31:48.176913  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.176921  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:48.176928  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:48.176992  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:48.199841  546345 cri.go:89] found id: ""
	I1202 22:31:48.199865  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.199873  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:48.199879  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:48.199937  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:48.223323  546345 cri.go:89] found id: ""
	I1202 22:31:48.223346  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.223354  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:48.223361  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:48.223419  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:48.246053  546345 cri.go:89] found id: ""
	I1202 22:31:48.246079  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.246088  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:48.246095  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:48.246152  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:48.269713  546345 cri.go:89] found id: ""
	I1202 22:31:48.269739  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.269748  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:48.269755  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:48.269811  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:48.295336  546345 cri.go:89] found id: ""
	I1202 22:31:48.295359  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.295368  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:48.295374  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:48.295435  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:48.318964  546345 cri.go:89] found id: ""
	I1202 22:31:48.318989  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.319001  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:48.319009  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:48.319114  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:48.342776  546345 cri.go:89] found id: ""
	I1202 22:31:48.342803  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.342812  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:48.342821  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:48.342834  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:48.366473  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:48.366507  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:48.397880  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:48.397907  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:48.453030  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:48.453066  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:48.468428  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:48.468455  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:48.530252  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:48.521915    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.522761    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.524653    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.525302    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.526795    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:48.521915    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.522761    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.524653    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.525302    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.526795    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
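
[editor's note] Every "describe nodes" attempt above fails the same way: the kubeconfig points kubectl at https://localhost:8443, and since no kube-apiserver container exists, the TCP connect is refused before any TLS or HTTP exchange; the repeated memcache.go lines are client-go retrying API group discovery. A tiny sketch reproducing just the refused dial, under the assumption that nothing is listening on 8443:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// With no apiserver bound to the port, this dial fails with
	// "connection refused", matching the errors in the log.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("connect failed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on the apiserver port")
}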
	I1202 22:31:51.030539  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:51.041072  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:51.041139  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:51.064958  546345 cri.go:89] found id: ""
	I1202 22:31:51.064986  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.064994  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:51.065004  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:51.065074  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:51.090247  546345 cri.go:89] found id: ""
	I1202 22:31:51.090275  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.090284  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:51.090290  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:51.090356  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:51.117187  546345 cri.go:89] found id: ""
	I1202 22:31:51.117224  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.117235  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:51.117242  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:51.117326  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:51.143456  546345 cri.go:89] found id: ""
	I1202 22:31:51.143483  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.143492  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:51.143499  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:51.143563  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:51.169463  546345 cri.go:89] found id: ""
	I1202 22:31:51.169542  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.169565  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:51.169587  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:51.169719  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:51.195977  546345 cri.go:89] found id: ""
	I1202 22:31:51.196019  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.196028  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:51.196035  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:51.196105  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:51.221006  546345 cri.go:89] found id: ""
	I1202 22:31:51.221030  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.221045  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:51.221051  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:51.221119  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:51.245434  546345 cri.go:89] found id: ""
	I1202 22:31:51.245457  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.245466  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:51.245475  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:51.245486  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:51.273171  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:51.273198  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:51.328523  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:51.328562  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:51.344211  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:51.344238  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:51.405812  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:51.397619    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.398419    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.399942    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.400527    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.402107    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:51.397619    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.398419    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.399942    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.400527    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.402107    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:51.405843  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:51.405859  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:53.930346  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:53.940572  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:53.940646  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:53.968500  546345 cri.go:89] found id: ""
	I1202 22:31:53.968531  546345 logs.go:282] 0 containers: []
	W1202 22:31:53.968540  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:53.968547  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:53.968605  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:53.993271  546345 cri.go:89] found id: ""
	I1202 22:31:53.993298  546345 logs.go:282] 0 containers: []
	W1202 22:31:53.993306  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:53.993314  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:53.993372  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:54.020928  546345 cri.go:89] found id: ""
	I1202 22:31:54.020956  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.020965  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:54.020973  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:54.021039  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:54.047236  546345 cri.go:89] found id: ""
	I1202 22:31:54.047260  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.047269  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:54.047276  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:54.047336  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:54.072186  546345 cri.go:89] found id: ""
	I1202 22:31:54.072219  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.072228  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:54.072235  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:54.072310  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:54.097358  546345 cri.go:89] found id: ""
	I1202 22:31:54.097390  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.097400  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:54.097407  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:54.097484  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:54.122635  546345 cri.go:89] found id: ""
	I1202 22:31:54.122739  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.122765  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:54.122787  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:54.122881  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:54.147140  546345 cri.go:89] found id: ""
	I1202 22:31:54.147205  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.147228  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:54.147244  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:54.147257  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:54.209277  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:54.202024    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.202800    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.204383    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.204707    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.206238    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:54.202024    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.202800    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.204383    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.204707    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.206238    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:54.209298  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:54.209312  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:54.233525  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:54.233564  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:54.267595  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:54.267623  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:54.322957  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:54.322991  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:56.839135  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:56.854872  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:56.854954  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:56.883302  546345 cri.go:89] found id: ""
	I1202 22:31:56.883327  546345 logs.go:282] 0 containers: []
	W1202 22:31:56.883335  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:56.883342  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:56.883400  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:56.909437  546345 cri.go:89] found id: ""
	I1202 22:31:56.909478  546345 logs.go:282] 0 containers: []
	W1202 22:31:56.909495  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:56.909502  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:56.909574  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:56.935567  546345 cri.go:89] found id: ""
	I1202 22:31:56.935592  546345 logs.go:282] 0 containers: []
	W1202 22:31:56.935600  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:56.935607  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:56.935700  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:56.962296  546345 cri.go:89] found id: ""
	I1202 22:31:56.962322  546345 logs.go:282] 0 containers: []
	W1202 22:31:56.962339  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:56.962352  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:56.962417  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:56.987308  546345 cri.go:89] found id: ""
	I1202 22:31:56.987333  546345 logs.go:282] 0 containers: []
	W1202 22:31:56.987341  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:56.987348  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:56.987409  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:57.017409  546345 cri.go:89] found id: ""
	I1202 22:31:57.017436  546345 logs.go:282] 0 containers: []
	W1202 22:31:57.017444  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:57.017451  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:57.017519  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:57.043570  546345 cri.go:89] found id: ""
	I1202 22:31:57.043593  546345 logs.go:282] 0 containers: []
	W1202 22:31:57.043601  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:57.043607  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:57.043670  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:57.068973  546345 cri.go:89] found id: ""
	I1202 22:31:57.069005  546345 logs.go:282] 0 containers: []
	W1202 22:31:57.069014  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:57.069023  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:57.069034  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:57.093239  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:57.093275  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:57.120751  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:57.120777  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:57.176173  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:57.176209  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:57.193001  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:57.193035  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:57.259032  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:57.251882    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.252406    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.253992    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.254374    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.255868    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:57.251882    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.252406    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.253992    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.254374    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.255868    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:59.760716  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:59.771290  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:59.771364  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:59.819477  546345 cri.go:89] found id: ""
	I1202 22:31:59.819507  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.819521  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:59.819528  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:59.819609  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:59.879132  546345 cri.go:89] found id: ""
	I1202 22:31:59.879159  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.879168  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:59.879175  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:59.879235  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:59.909985  546345 cri.go:89] found id: ""
	I1202 22:31:59.910011  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.910020  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:59.910027  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:59.910083  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:59.934326  546345 cri.go:89] found id: ""
	I1202 22:31:59.934350  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.934359  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:59.934366  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:59.934424  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:59.963200  546345 cri.go:89] found id: ""
	I1202 22:31:59.963224  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.963233  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:59.963240  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:59.963327  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:59.989148  546345 cri.go:89] found id: ""
	I1202 22:31:59.989180  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.989190  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:59.989196  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:59.989302  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:00.074954  546345 cri.go:89] found id: ""
	I1202 22:32:00.075036  546345 logs.go:282] 0 containers: []
	W1202 22:32:00.075063  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:00.075085  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:00.075215  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:00.226233  546345 cri.go:89] found id: ""
	I1202 22:32:00.226259  546345 logs.go:282] 0 containers: []
	W1202 22:32:00.226269  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:00.226279  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:00.226293  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:00.336324  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:00.336441  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:00.371299  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:00.371905  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:00.484267  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:00.475120    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.475618    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.477981    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.479118    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.480037    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:00.475120    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.475618    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.477981    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.479118    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.480037    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:00.484297  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:00.484311  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:00.512091  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:00.512128  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:03.068479  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:03.078801  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:03.078893  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:03.103732  546345 cri.go:89] found id: ""
	I1202 22:32:03.103758  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.103766  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:03.103773  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:03.103832  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:03.128397  546345 cri.go:89] found id: ""
	I1202 22:32:03.128426  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.128435  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:03.128441  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:03.128501  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:03.153803  546345 cri.go:89] found id: ""
	I1202 22:32:03.153877  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.153899  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:03.153913  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:03.153988  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:03.181014  546345 cri.go:89] found id: ""
	I1202 22:32:03.181038  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.181047  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:03.181053  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:03.181152  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:03.210807  546345 cri.go:89] found id: ""
	I1202 22:32:03.210834  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.210843  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:03.210850  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:03.210911  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:03.239226  546345 cri.go:89] found id: ""
	I1202 22:32:03.239251  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.239260  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:03.239267  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:03.239326  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:03.263944  546345 cri.go:89] found id: ""
	I1202 22:32:03.263969  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.263978  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:03.263984  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:03.264044  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:03.287558  546345 cri.go:89] found id: ""
	I1202 22:32:03.287583  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.287592  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:03.287601  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:03.287612  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:03.311743  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:03.311776  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:03.343056  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:03.343083  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:03.397595  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:03.397629  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:03.413119  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:03.413155  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:03.475280  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:03.468130    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.468858    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.470478    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.470758    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.472212    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:03.468130    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.468858    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.470478    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.470758    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.472212    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:05.975590  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:05.985554  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:05.985622  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:06.019132  546345 cri.go:89] found id: ""
	I1202 22:32:06.019157  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.019166  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:06.019173  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:06.019241  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:06.044254  546345 cri.go:89] found id: ""
	I1202 22:32:06.044277  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.044286  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:06.044293  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:06.044357  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:06.073518  546345 cri.go:89] found id: ""
	I1202 22:32:06.073541  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.073550  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:06.073556  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:06.073619  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:06.103333  546345 cri.go:89] found id: ""
	I1202 22:32:06.103400  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.103431  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:06.103450  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:06.103539  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:06.129000  546345 cri.go:89] found id: ""
	I1202 22:32:06.129036  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.129051  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:06.129058  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:06.129128  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:06.155243  546345 cri.go:89] found id: ""
	I1202 22:32:06.155266  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.155274  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:06.155281  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:06.155341  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:06.183834  546345 cri.go:89] found id: ""
	I1202 22:32:06.183900  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.183923  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:06.183942  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:06.184033  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:06.208508  546345 cri.go:89] found id: ""
	I1202 22:32:06.208546  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.208556  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:06.208566  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:06.208578  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:06.265928  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:06.265966  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:06.281782  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:06.281811  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:06.341568  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:06.333544    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.334347    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.335275    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.336735    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.337307    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:06.333544    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.334347    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.335275    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.336735    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.337307    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:06.341591  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:06.341603  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:06.366403  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:06.366435  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:08.899765  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:08.910234  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:08.910306  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:08.940951  546345 cri.go:89] found id: ""
	I1202 22:32:08.940979  546345 logs.go:282] 0 containers: []
	W1202 22:32:08.940989  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:08.940995  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:08.941054  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:08.966172  546345 cri.go:89] found id: ""
	I1202 22:32:08.966198  546345 logs.go:282] 0 containers: []
	W1202 22:32:08.966207  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:08.966214  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:08.966274  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:08.990534  546345 cri.go:89] found id: ""
	I1202 22:32:08.990561  546345 logs.go:282] 0 containers: []
	W1202 22:32:08.990569  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:08.990576  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:08.990633  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:09.016942  546345 cri.go:89] found id: ""
	I1202 22:32:09.016970  546345 logs.go:282] 0 containers: []
	W1202 22:32:09.016979  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:09.016986  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:09.017052  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:09.040852  546345 cri.go:89] found id: ""
	I1202 22:32:09.040893  546345 logs.go:282] 0 containers: []
	W1202 22:32:09.040902  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:09.040909  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:09.040978  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:09.064884  546345 cri.go:89] found id: ""
	I1202 22:32:09.064958  546345 logs.go:282] 0 containers: []
	W1202 22:32:09.064986  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:09.065005  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:09.065114  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:09.088807  546345 cri.go:89] found id: ""
	I1202 22:32:09.088878  546345 logs.go:282] 0 containers: []
	W1202 22:32:09.088903  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:09.088922  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:09.089011  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:09.115024  546345 cri.go:89] found id: ""
	I1202 22:32:09.115051  546345 logs.go:282] 0 containers: []
	W1202 22:32:09.115060  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:09.115069  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:09.115080  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:09.138651  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:09.138687  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:09.165425  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:09.165449  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:09.222720  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:09.222752  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:09.238413  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:09.238441  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:09.299159  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:09.292446    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.292889    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.294367    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.294689    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.296107    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:09.292446    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.292889    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.294367    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.294689    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.296107    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:11.799390  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:11.813803  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:11.813890  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:11.853262  546345 cri.go:89] found id: ""
	I1202 22:32:11.853298  546345 logs.go:282] 0 containers: []
	W1202 22:32:11.853311  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:11.853318  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:11.853394  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:11.898451  546345 cri.go:89] found id: ""
	I1202 22:32:11.898474  546345 logs.go:282] 0 containers: []
	W1202 22:32:11.898482  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:11.898489  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:11.898549  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:11.926743  546345 cri.go:89] found id: ""
	I1202 22:32:11.926817  546345 logs.go:282] 0 containers: []
	W1202 22:32:11.926840  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:11.926860  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:11.926980  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:11.950985  546345 cri.go:89] found id: ""
	I1202 22:32:11.951011  546345 logs.go:282] 0 containers: []
	W1202 22:32:11.951019  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:11.951027  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:11.951106  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:11.975373  546345 cri.go:89] found id: ""
	I1202 22:32:11.975399  546345 logs.go:282] 0 containers: []
	W1202 22:32:11.975407  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:11.975414  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:11.975490  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:12.005482  546345 cri.go:89] found id: ""
	I1202 22:32:12.005511  546345 logs.go:282] 0 containers: []
	W1202 22:32:12.005521  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:12.005529  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:12.005643  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:12.032572  546345 cri.go:89] found id: ""
	I1202 22:32:12.032597  546345 logs.go:282] 0 containers: []
	W1202 22:32:12.032607  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:12.032634  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:12.032733  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:12.059401  546345 cri.go:89] found id: ""
	I1202 22:32:12.059476  546345 logs.go:282] 0 containers: []
	W1202 22:32:12.059492  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:12.059504  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:12.059517  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:12.093142  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:12.093179  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:12.150021  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:12.150054  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:12.165956  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:12.165987  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:12.231857  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:12.225209    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.225713    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.227176    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.227478    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.228901    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:12.225209    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.225713    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.227176    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.227478    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.228901    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:12.231929  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:12.231956  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:14.756725  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:14.767263  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:14.767333  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:14.801672  546345 cri.go:89] found id: ""
	I1202 22:32:14.801697  546345 logs.go:282] 0 containers: []
	W1202 22:32:14.801706  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:14.801713  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:14.801770  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:14.851488  546345 cri.go:89] found id: ""
	I1202 22:32:14.851517  546345 logs.go:282] 0 containers: []
	W1202 22:32:14.851532  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:14.851538  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:14.851605  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:14.888023  546345 cri.go:89] found id: ""
	I1202 22:32:14.888048  546345 logs.go:282] 0 containers: []
	W1202 22:32:14.888057  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:14.888064  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:14.888129  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:14.916001  546345 cri.go:89] found id: ""
	I1202 22:32:14.916053  546345 logs.go:282] 0 containers: []
	W1202 22:32:14.916061  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:14.916068  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:14.916135  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:14.942133  546345 cri.go:89] found id: ""
	I1202 22:32:14.942199  546345 logs.go:282] 0 containers: []
	W1202 22:32:14.942222  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:14.942240  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:14.942326  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:14.967663  546345 cri.go:89] found id: ""
	I1202 22:32:14.967694  546345 logs.go:282] 0 containers: []
	W1202 22:32:14.967702  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:14.967710  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:14.967779  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:14.997283  546345 cri.go:89] found id: ""
	I1202 22:32:14.997360  546345 logs.go:282] 0 containers: []
	W1202 22:32:14.997398  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:14.997424  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:14.997514  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:15.028362  546345 cri.go:89] found id: ""
	I1202 22:32:15.028443  546345 logs.go:282] 0 containers: []
	W1202 22:32:15.028481  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:15.028510  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:15.028577  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:15.084989  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:15.085026  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:15.101099  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:15.101135  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:15.163640  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:15.156494    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:15.157156    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:15.158849    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:15.159159    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:15.160627    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:15.156494    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:15.157156    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:15.158849    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:15.159159    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:15.160627    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:15.163661  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:15.163673  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:15.188815  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:15.188850  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
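The block above is one complete diagnostic pass: after the apiserver health check fails, minikube collects the kubelet journal, dmesg, "kubectl describe nodes", the containerd journal, and a "crictl ps -a" snapshot from the node. A minimal standalone sketch of that collection step in Go, assuming journalctl and crictl are on PATH (the helper name gather is ours; this illustrates the pattern, not minikube's actual logs.go implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather runs one diagnostic command and returns its combined output.
    // Failures are reported inline rather than aborting, mirroring how the
    // pass above keeps going after "describe nodes" fails.
    func gather(name string, args ...string) string {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	if err != nil {
    		return fmt.Sprintf("%s failed: %v\n%s", name, err, out)
    	}
    	return string(out)
    }

    func main() {
    	fmt.Println(gather("journalctl", "-u", "kubelet", "-n", "400"))
    	fmt.Println(gather("journalctl", "-u", "containerd", "-n", "400"))
    	fmt.Println(gather("crictl", "ps", "-a"))
    }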
	I1202 22:32:17.720502  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:17.730835  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:17.730906  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:17.754959  546345 cri.go:89] found id: ""
	I1202 22:32:17.754985  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.754994  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:17.755001  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:17.755058  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:17.779124  546345 cri.go:89] found id: ""
	I1202 22:32:17.779145  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.779153  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:17.779159  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:17.779216  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:17.861624  546345 cri.go:89] found id: ""
	I1202 22:32:17.861647  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.861670  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:17.861676  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:17.861733  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:17.891578  546345 cri.go:89] found id: ""
	I1202 22:32:17.891604  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.891612  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:17.891620  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:17.891677  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:17.914983  546345 cri.go:89] found id: ""
	I1202 22:32:17.915005  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.915013  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:17.915019  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:17.915075  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:17.938893  546345 cri.go:89] found id: ""
	I1202 22:32:17.938923  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.938932  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:17.938939  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:17.938997  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:17.964896  546345 cri.go:89] found id: ""
	I1202 22:32:17.964960  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.964983  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:17.964997  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:17.965076  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:17.988828  546345 cri.go:89] found id: ""
	I1202 22:32:17.988863  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.988872  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:17.988882  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:17.988893  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:18.022032  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:18.022059  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:18.077598  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:18.077635  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:18.095143  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:18.095184  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:18.157395  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:18.150066    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:18.150607    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:18.152261    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:18.152789    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:18.154364    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:18.150066    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:18.150607    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:18.152261    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:18.152789    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:18.154364    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:18.157426  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:18.157439  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
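Note the cadence: passes begin at 22:32:14, 22:32:17, 22:32:20, and so on, roughly every three seconds, and each one opens with "sudo pgrep -xnf kube-apiserver.*minikube.*" to check whether an apiserver process has appeared yet. A sketch of such a wait loop, with the three-second interval read off the timestamps above rather than taken from minikube's source (waitForAPIServer is an illustrative name):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls pgrep until a kube-apiserver process shows up
    // or the deadline passes. pgrep flags: -x exact match, -n newest
    // matching process, -f match against the full command line.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return nil // pgrep exited 0: a matching process exists
    		}
    		time.Sleep(3 * time.Second) // cadence inferred from the log timestamps
    	}
    	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServer(30 * time.Second); err != nil {
    		fmt.Println(err)
    	}
    }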
	I1202 22:32:20.681946  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:20.692713  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:20.692790  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:20.716255  546345 cri.go:89] found id: ""
	I1202 22:32:20.716281  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.716290  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:20.716297  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:20.716355  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:20.743603  546345 cri.go:89] found id: ""
	I1202 22:32:20.743629  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.743638  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:20.743645  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:20.743705  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:20.768770  546345 cri.go:89] found id: ""
	I1202 22:32:20.768798  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.768807  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:20.768814  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:20.768878  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:20.805921  546345 cri.go:89] found id: ""
	I1202 22:32:20.805945  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.805954  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:20.805960  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:20.806018  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:20.882456  546345 cri.go:89] found id: ""
	I1202 22:32:20.882478  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.882486  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:20.882493  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:20.882548  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:20.906709  546345 cri.go:89] found id: ""
	I1202 22:32:20.906732  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.906740  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:20.906747  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:20.906803  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:20.930871  546345 cri.go:89] found id: ""
	I1202 22:32:20.930947  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.930970  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:20.930985  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:20.931072  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:20.954799  546345 cri.go:89] found id: ""
	I1202 22:32:20.954823  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.954832  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:20.954841  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:20.954853  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:20.982221  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:20.982253  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:21.038726  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:21.038763  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:21.054186  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:21.054213  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:21.118780  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:21.110603    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:21.111205    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:21.112835    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:21.113193    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:21.114862    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:21.110603    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:21.111205    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:21.112835    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:21.113193    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:21.114862    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:21.118846  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:21.118868  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
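Each cri.go:54 line queries one control-plane component by name. The underlying command, "crictl ps -a --quiet --name=NAME", prints matching container IDs one per line; empty output is exactly what produces the paired found id: "" and 0 containers lines above. A sketch of that query (listContainers is our name; crictl is assumed installed, talking to the node's containerd socket, and may need root):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers returns the IDs of all containers in any state whose
    // name matches the given component, using the same crictl invocation
    // as the log above.
    func listContainers(component string) ([]string, error) {
    	out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+component).Output()
    	if err != nil {
    		return nil, err
    	}
    	// Empty output yields an empty slice, i.e. the "0 containers" case.
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
    		ids, err := listContainers(c)
    		fmt.Printf("%s: %d containers %v (err=%v)\n", c, len(ids), ids, err)
    	}
    }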
	I1202 22:32:23.643583  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:23.655825  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:23.655896  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:23.680044  546345 cri.go:89] found id: ""
	I1202 22:32:23.680070  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.680079  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:23.680085  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:23.680143  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:23.708984  546345 cri.go:89] found id: ""
	I1202 22:32:23.709009  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.709017  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:23.709024  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:23.709082  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:23.734044  546345 cri.go:89] found id: ""
	I1202 22:32:23.734068  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.734076  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:23.734082  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:23.734142  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:23.763083  546345 cri.go:89] found id: ""
	I1202 22:32:23.763110  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.763118  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:23.763125  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:23.763183  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:23.809231  546345 cri.go:89] found id: ""
	I1202 22:32:23.809254  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.809262  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:23.809269  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:23.809328  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:23.877561  546345 cri.go:89] found id: ""
	I1202 22:32:23.877585  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.877593  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:23.877600  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:23.877685  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:23.900843  546345 cri.go:89] found id: ""
	I1202 22:32:23.900870  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.900879  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:23.900885  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:23.900948  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:23.926458  546345 cri.go:89] found id: ""
	I1202 22:32:23.926497  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.926506  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:23.926515  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:23.926526  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:23.951259  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:23.951296  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:23.979352  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:23.979421  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:24.036927  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:24.036965  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:24.052889  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:24.052925  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:24.114973  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:24.108057    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:24.108597    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:24.110063    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:24.110491    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:24.111943    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:24.108057    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:24.108597    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:24.110063    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:24.110491    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:24.111943    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
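Every "describe nodes" attempt fails identically: kubectl's discovery layer (the memcache.go lines) requests https://localhost:8443/api and the dial to [::1]:8443 is refused. "connection refused" means the TCP handshake was actively rejected, so the host is reachable but nothing is bound to port 8443, which is consistent with the empty kube-apiserver listings above. A small probe that distinguishes this case from a timeout (probe is an illustrative helper name):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // probe reports whether anything is listening on the given address.
    // A refused connection returns quickly with an explicit error, unlike
    // a silently dropped packet, which would run out the timeout instead.
    func probe(addr string) {
    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    	if err != nil {
    		fmt.Printf("%s: %v\n", addr, err) // e.g. connect: connection refused
    		return
    	}
    	conn.Close()
    	fmt.Printf("%s: listening\n", addr)
    }

    func main() {
    	// localhost resolves to [::1] first on this host, per the log lines.
    	probe("localhost:8443")
    }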
	I1202 22:32:26.615216  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:26.625455  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:26.625533  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:26.651390  546345 cri.go:89] found id: ""
	I1202 22:32:26.651423  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.651432  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:26.651439  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:26.651508  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:26.677027  546345 cri.go:89] found id: ""
	I1202 22:32:26.677052  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.677060  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:26.677067  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:26.677127  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:26.706368  546345 cri.go:89] found id: ""
	I1202 22:32:26.706391  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.706400  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:26.706406  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:26.706469  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:26.730421  546345 cri.go:89] found id: ""
	I1202 22:32:26.730445  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.730453  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:26.730460  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:26.730525  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:26.754523  546345 cri.go:89] found id: ""
	I1202 22:32:26.754552  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.754561  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:26.754569  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:26.754633  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:26.779516  546345 cri.go:89] found id: ""
	I1202 22:32:26.779545  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.779554  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:26.779568  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:26.779632  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:26.823212  546345 cri.go:89] found id: ""
	I1202 22:32:26.823237  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.823246  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:26.823253  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:26.823313  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:26.858245  546345 cri.go:89] found id: ""
	I1202 22:32:26.858282  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.858291  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:26.858300  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:26.858313  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:26.917465  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:26.917500  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:26.933252  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:26.933281  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:26.995404  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:26.986840    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:26.987445    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:26.989131    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:26.989768    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:26.991326    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:26.986840    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:26.987445    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:26.989131    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:26.989768    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:26.991326    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:26.995426  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:26.995438  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:27.021457  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:27.021490  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
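The order of the gathering steps changes from pass to pass: kubelet comes first in some passes, container status or containerd first in others. That pattern is consistent with ranging over a Go map, whose iteration order is randomized on each traversal; whether minikube actually keeps its log gatherers in a map is an assumption here, not something the log itself proves. A quick illustration of the effect:

    package main

    import "fmt"

    // Ranging over a map yields keys in a different order on each run,
    // which would account for the shuffled "Gathering logs for ..."
    // sequences seen across the passes above.
    func main() {
    	gatherers := map[string]bool{
    		"kubelet": true, "dmesg": true, "describe nodes": true,
    		"containerd": true, "container status": true,
    	}
    	for name := range gatherers {
    		fmt.Println("Gathering logs for", name, "...")
    	}
    }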
	I1202 22:32:29.552148  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:29.562514  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:29.562594  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:29.587012  546345 cri.go:89] found id: ""
	I1202 22:32:29.587037  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.587046  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:29.587079  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:29.587163  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:29.613219  546345 cri.go:89] found id: ""
	I1202 22:32:29.613246  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.613254  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:29.613261  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:29.613321  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:29.638585  546345 cri.go:89] found id: ""
	I1202 22:32:29.638611  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.638619  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:29.638626  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:29.638682  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:29.663132  546345 cri.go:89] found id: ""
	I1202 22:32:29.663208  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.663225  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:29.663232  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:29.663304  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:29.686925  546345 cri.go:89] found id: ""
	I1202 22:32:29.686947  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.686955  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:29.686961  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:29.687021  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:29.711947  546345 cri.go:89] found id: ""
	I1202 22:32:29.711971  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.711979  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:29.711986  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:29.712047  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:29.735873  546345 cri.go:89] found id: ""
	I1202 22:32:29.735940  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.735962  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:29.735988  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:29.736071  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:29.764629  546345 cri.go:89] found id: ""
	I1202 22:32:29.764655  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.764664  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:29.764674  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:29.764685  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:29.789251  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:29.789289  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:29.859060  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:29.859085  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:29.927618  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:29.927653  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:29.944397  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:29.944477  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:30.015300  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:30.004451    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:30.006385    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:30.007099    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:30.009385    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:30.010389    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:30.004451    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:30.006385    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:30.007099    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:30.009385    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:30.010389    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:32.515559  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:32.525887  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:32.525957  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:32.549813  546345 cri.go:89] found id: ""
	I1202 22:32:32.549848  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.549857  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:32.549865  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:32.549931  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:32.575230  546345 cri.go:89] found id: ""
	I1202 22:32:32.575253  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.575261  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:32.575268  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:32.575359  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:32.600349  546345 cri.go:89] found id: ""
	I1202 22:32:32.600374  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.600382  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:32.600389  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:32.600448  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:32.629053  546345 cri.go:89] found id: ""
	I1202 22:32:32.629078  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.629086  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:32.629095  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:32.629152  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:32.653727  546345 cri.go:89] found id: ""
	I1202 22:32:32.653750  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.653759  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:32.653766  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:32.653824  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:32.677981  546345 cri.go:89] found id: ""
	I1202 22:32:32.678019  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.678028  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:32.678035  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:32.678101  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:32.702199  546345 cri.go:89] found id: ""
	I1202 22:32:32.702222  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.702230  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:32.702237  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:32.702294  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:32.725924  546345 cri.go:89] found id: ""
	I1202 22:32:32.725957  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.725967  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:32.725976  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:32.726002  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:32.779589  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:32.779623  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:32.807508  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:32.807541  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:32.902366  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:32.894161    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.894903    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.896591    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.896873    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.898388    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:32.894161    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.894903    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.896591    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.896873    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.898388    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:32.902386  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:32.902399  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:32.925648  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:32.925948  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
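The container status step is built defensively: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a falls back to the bare crictl name if which finds nothing, and to docker ps -a if crictl fails outright, so the step still yields output on docker-runtime nodes. The same first-success fallback expressed in Go (runFirst is our name; sudo is omitted here for brevity, unlike the logged command):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runFirst tries each command line in turn and returns the output of
    // the first one that exits successfully, mirroring the shell's
    // "a || b" fallback chain in the log.
    func runFirst(cmds [][]string) (string, error) {
    	var lastErr error
    	for _, c := range cmds {
    		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
    		if err == nil {
    			return string(out), nil
    		}
    		lastErr = err
    	}
    	return "", fmt.Errorf("all commands failed, last error: %w", lastErr)
    }

    func main() {
    	out, err := runFirst([][]string{
    		{"crictl", "ps", "-a"},
    		{"docker", "ps", "-a"},
    	})
    	fmt.Println(out, err)
    }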
	I1202 22:32:35.456822  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:35.467636  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:35.467796  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:35.496302  546345 cri.go:89] found id: ""
	I1202 22:32:35.496328  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.496337  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:35.496343  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:35.496407  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:35.525080  546345 cri.go:89] found id: ""
	I1202 22:32:35.525107  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.525116  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:35.525122  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:35.525187  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:35.549407  546345 cri.go:89] found id: ""
	I1202 22:32:35.549432  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.549441  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:35.549447  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:35.549505  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:35.574018  546345 cri.go:89] found id: ""
	I1202 22:32:35.574040  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.574049  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:35.574056  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:35.574115  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:35.604104  546345 cri.go:89] found id: ""
	I1202 22:32:35.604128  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.604137  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:35.604143  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:35.604201  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:35.629312  546345 cri.go:89] found id: ""
	I1202 22:32:35.629346  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.629355  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:35.629361  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:35.629427  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:35.653959  546345 cri.go:89] found id: ""
	I1202 22:32:35.653987  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.653996  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:35.654003  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:35.654064  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:35.678225  546345 cri.go:89] found id: ""
	I1202 22:32:35.678301  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.678325  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:35.678343  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:35.678368  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:35.733851  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:35.733884  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:35.749526  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:35.749554  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:35.844900  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:35.824762    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.828451    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.838437    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.839235    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.840948    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:35.824762    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.828451    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.838437    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.839235    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.840948    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:35.844925  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:35.844940  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:35.882135  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:35.882168  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:38.412949  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:38.423327  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:38.423399  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:38.447069  546345 cri.go:89] found id: ""
	I1202 22:32:38.447097  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.447107  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:38.447148  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:38.447205  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:38.473526  546345 cri.go:89] found id: ""
	I1202 22:32:38.473549  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.473558  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:38.473565  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:38.473626  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:38.501943  546345 cri.go:89] found id: ""
	I1202 22:32:38.501974  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.501984  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:38.501990  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:38.502049  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:38.526634  546345 cri.go:89] found id: ""
	I1202 22:32:38.526657  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.526666  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:38.526672  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:38.526730  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:38.555523  546345 cri.go:89] found id: ""
	I1202 22:32:38.555549  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.555558  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:38.555564  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:38.555622  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:38.579778  546345 cri.go:89] found id: ""
	I1202 22:32:38.579804  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.579812  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:38.579819  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:38.579875  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:38.605528  546345 cri.go:89] found id: ""
	I1202 22:32:38.605589  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.605613  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:38.605633  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:38.605733  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:38.629391  546345 cri.go:89] found id: ""
	I1202 22:32:38.629412  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.629421  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
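Each control-plane component is then looked up by name through crictl; an empty result is what surfaces as found id: "" and "0 containers" in the log. A sketch that reproduces the whole sequence in one pass inside the node, using exactly the names queried above:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  echo "$name: ${ids:-<none>}"    # <none> corresponds to a "No container was found" warning
	done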
	I1202 22:32:38.629429  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:38.629441  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:38.684729  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:38.684763  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
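With no containers to inspect, the loop falls back to host-level sources: the last 400 lines of the kubelet unit and kernel messages at warning level or worse. A manual equivalent inside the node (--no-pager is added here for interactive use and is an assumption, not part of the logged command):

	sudo journalctl -u kubelet -n 400 --no-pager
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400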
	I1202 22:32:38.699841  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:38.699916  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:38.767359  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:38.760357    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.761047    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.762602    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.762883    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.764331    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
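The describe-nodes failure follows directly from the empty listings: the node's kubeconfig points kubectl at localhost:8443, and with no kube-apiserver container running, nothing listens on that port, so every request is refused. A quick probe of the same port from inside the node (a hedged check, using the standard /livez endpoint):

	curl -sk --max-time 5 https://localhost:8443/livez || echo "nothing listening on :8443"
	# "connection refused" from curl here matches the memcache.go errors above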
	I1202 22:32:38.767378  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:38.767391  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:38.792073  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:38.792104  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
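The container-status step wraps crictl in a fallback chain: the command substitution resolves to crictl's absolute path when it is installed (or to the bare word crictl when it is not), and if that invocation fails for any reason the trailing alternative runs docker instead. Rewritten with $() substitution, the logged one-liner reads:

	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
	# try crictl first; fall back to the docker CLI only if the crictl invocation fails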
	I1202 22:32:41.385000  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:41.395673  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:41.395741  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:41.420534  546345 cri.go:89] found id: ""
	I1202 22:32:41.420574  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.420586  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:41.420593  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:41.420652  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:41.445534  546345 cri.go:89] found id: ""
	I1202 22:32:41.445559  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.445567  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:41.445573  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:41.445635  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:41.470438  546345 cri.go:89] found id: ""
	I1202 22:32:41.470463  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.470473  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:41.470481  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:41.470551  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:41.495013  546345 cri.go:89] found id: ""
	I1202 22:32:41.495037  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.495045  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:41.495052  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:41.495139  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:41.520340  546345 cri.go:89] found id: ""
	I1202 22:32:41.520375  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.520385  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:41.520392  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:41.520488  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:41.545599  546345 cri.go:89] found id: ""
	I1202 22:32:41.545633  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.545642  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:41.545649  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:41.545753  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:41.570203  546345 cri.go:89] found id: ""
	I1202 22:32:41.570227  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.570235  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:41.570241  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:41.570317  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:41.595416  546345 cri.go:89] found id: ""
	I1202 22:32:41.595442  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.595451  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:41.595461  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:41.595493  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:41.622428  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:41.622456  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:41.678602  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:41.678634  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:41.694624  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:41.694654  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:41.757051  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:41.749001    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.749418    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.751146    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.751874    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.753439    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:32:41.757072  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:41.757085  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:44.281854  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:44.292430  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:44.292510  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:44.317241  546345 cri.go:89] found id: ""
	I1202 22:32:44.317271  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.317279  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:44.317286  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:44.317350  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:44.341824  546345 cri.go:89] found id: ""
	I1202 22:32:44.341849  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.341857  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:44.341865  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:44.341926  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:44.366036  546345 cri.go:89] found id: ""
	I1202 22:32:44.366061  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.366070  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:44.366077  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:44.366139  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:44.391175  546345 cri.go:89] found id: ""
	I1202 22:32:44.391200  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.391209  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:44.391216  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:44.391292  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:44.420090  546345 cri.go:89] found id: ""
	I1202 22:32:44.420123  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.420132  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:44.420155  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:44.420234  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:44.444490  546345 cri.go:89] found id: ""
	I1202 22:32:44.444540  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.444549  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:44.444557  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:44.444612  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:44.470392  546345 cri.go:89] found id: ""
	I1202 22:32:44.470419  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.470427  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:44.470434  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:44.470493  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:44.495601  546345 cri.go:89] found id: ""
	I1202 22:32:44.495624  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.495633  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:44.495664  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:44.495690  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:44.549795  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:44.549886  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:44.567082  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:44.567110  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:44.632540  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:44.624658    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.625347    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.626939    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.627534    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.629113    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:32:44.632570  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:44.632582  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:44.657144  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:44.657180  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:47.185793  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:47.196271  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:47.196339  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:47.226550  546345 cri.go:89] found id: ""
	I1202 22:32:47.226572  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.226581  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:47.226588  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:47.226645  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:47.250706  546345 cri.go:89] found id: ""
	I1202 22:32:47.250732  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.250741  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:47.250748  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:47.250811  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:47.280047  546345 cri.go:89] found id: ""
	I1202 22:32:47.280072  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.280081  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:47.280088  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:47.280154  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:47.306607  546345 cri.go:89] found id: ""
	I1202 22:32:47.306633  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.306642  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:47.306651  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:47.306718  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:47.330953  546345 cri.go:89] found id: ""
	I1202 22:32:47.331024  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.331038  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:47.331045  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:47.331105  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:47.360182  546345 cri.go:89] found id: ""
	I1202 22:32:47.360206  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.360215  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:47.360222  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:47.360293  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:47.388010  546345 cri.go:89] found id: ""
	I1202 22:32:47.388032  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.388041  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:47.388048  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:47.388114  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:47.415262  546345 cri.go:89] found id: ""
	I1202 22:32:47.415294  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.415303  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:47.415312  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:47.415326  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:47.433260  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:47.433288  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:47.497337  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:47.489370    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.490186    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.491743    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.492249    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.493701    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:32:47.497366  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:47.497378  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:47.521722  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:47.521801  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:47.548995  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:47.549027  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:50.107291  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:50.119155  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:50.119230  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:50.144228  546345 cri.go:89] found id: ""
	I1202 22:32:50.144252  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.144261  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:50.144268  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:50.144329  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:50.172928  546345 cri.go:89] found id: ""
	I1202 22:32:50.172951  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.172959  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:50.172966  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:50.173027  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:50.201752  546345 cri.go:89] found id: ""
	I1202 22:32:50.201795  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.201804  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:50.201811  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:50.201873  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:50.225118  546345 cri.go:89] found id: ""
	I1202 22:32:50.225139  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.225148  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:50.225154  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:50.225217  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:50.251396  546345 cri.go:89] found id: ""
	I1202 22:32:50.251421  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.251430  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:50.251437  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:50.251495  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:50.278859  546345 cri.go:89] found id: ""
	I1202 22:32:50.278887  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.278896  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:50.278903  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:50.278961  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:50.302858  546345 cri.go:89] found id: ""
	I1202 22:32:50.302891  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.302900  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:50.302907  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:50.302972  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:50.330618  546345 cri.go:89] found id: ""
	I1202 22:32:50.330642  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.330650  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:50.330659  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:50.330670  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:50.347121  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:50.347147  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:50.414460  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:50.406836   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.407605   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.409232   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.409526   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.410953   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:32:50.414482  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:50.414496  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:50.438651  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:50.438682  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:50.466506  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:50.466532  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:53.022126  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:53.032606  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:53.032678  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:53.068046  546345 cri.go:89] found id: ""
	I1202 22:32:53.068078  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.068088  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:53.068095  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:53.068154  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:53.130393  546345 cri.go:89] found id: ""
	I1202 22:32:53.130414  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.130423  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:53.130429  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:53.130488  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:53.156458  546345 cri.go:89] found id: ""
	I1202 22:32:53.156481  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.156498  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:53.156504  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:53.156564  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:53.180994  546345 cri.go:89] found id: ""
	I1202 22:32:53.181067  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.181090  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:53.181110  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:53.181196  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:53.204951  546345 cri.go:89] found id: ""
	I1202 22:32:53.204976  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.204985  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:53.204993  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:53.205053  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:53.232863  546345 cri.go:89] found id: ""
	I1202 22:32:53.232896  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.232905  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:53.232912  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:53.232981  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:53.263355  546345 cri.go:89] found id: ""
	I1202 22:32:53.263381  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.263390  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:53.263396  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:53.263454  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:53.288048  546345 cri.go:89] found id: ""
	I1202 22:32:53.288074  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.288082  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:53.288092  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:53.288103  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:53.343380  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:53.343416  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:53.359279  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:53.359304  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:53.426667  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:53.418963   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.419594   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.421185   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.421729   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.423366   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:32:53.426690  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:53.426703  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:53.451602  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:53.451640  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:55.979195  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:55.989644  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:55.989738  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:56.016824  546345 cri.go:89] found id: ""
	I1202 22:32:56.016857  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.016866  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:56.016873  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:56.016939  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:56.061785  546345 cri.go:89] found id: ""
	I1202 22:32:56.061833  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.061846  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:56.061854  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:56.061938  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:56.099312  546345 cri.go:89] found id: ""
	I1202 22:32:56.099341  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.099351  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:56.099359  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:56.099422  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:56.133179  546345 cri.go:89] found id: ""
	I1202 22:32:56.133209  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.133217  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:56.133224  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:56.133285  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:56.166397  546345 cri.go:89] found id: ""
	I1202 22:32:56.166420  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.166429  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:56.166435  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:56.166493  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:56.191240  546345 cri.go:89] found id: ""
	I1202 22:32:56.191300  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.191323  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:56.191343  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:56.191406  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:56.219940  546345 cri.go:89] found id: ""
	I1202 22:32:56.219966  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.219975  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:56.219982  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:56.220042  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:56.245089  546345 cri.go:89] found id: ""
	I1202 22:32:56.245116  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.245125  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:56.245134  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:56.245145  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:56.275969  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:56.275995  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:56.330353  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:56.330388  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:56.346262  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:56.346293  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:56.411285  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:56.403890   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.404747   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.406252   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.406670   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.408188   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:32:56.411307  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:56.411320  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:58.937516  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:58.947690  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:58.947760  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:58.971175  546345 cri.go:89] found id: ""
	I1202 22:32:58.971209  546345 logs.go:282] 0 containers: []
	W1202 22:32:58.971221  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:58.971229  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:58.971289  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:58.995437  546345 cri.go:89] found id: ""
	I1202 22:32:58.995465  546345 logs.go:282] 0 containers: []
	W1202 22:32:58.995474  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:58.995481  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:58.995538  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:59.021289  546345 cri.go:89] found id: ""
	I1202 22:32:59.021315  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.021323  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:59.021329  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:59.021388  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:59.067648  546345 cri.go:89] found id: ""
	I1202 22:32:59.067676  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.067684  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:59.067691  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:59.067752  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:59.120318  546345 cri.go:89] found id: ""
	I1202 22:32:59.120353  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.120362  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:59.120369  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:59.120435  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:59.147811  546345 cri.go:89] found id: ""
	I1202 22:32:59.147845  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.147855  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:59.147862  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:59.147929  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:59.176414  546345 cri.go:89] found id: ""
	I1202 22:32:59.176448  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.176456  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:59.176463  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:59.176534  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:59.202001  546345 cri.go:89] found id: ""
	I1202 22:32:59.202027  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.202035  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:59.202045  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:59.202056  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:59.257545  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:59.257581  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:59.273305  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:59.273385  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:59.335480  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:59.328097   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.328843   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.330336   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.330876   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.332477   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:32:59.335501  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:59.335514  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:59.359981  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:59.360017  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:01.886549  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:01.897148  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:01.897222  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:01.923194  546345 cri.go:89] found id: ""
	I1202 22:33:01.923220  546345 logs.go:282] 0 containers: []
	W1202 22:33:01.923229  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:01.923236  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:01.923295  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:01.947898  546345 cri.go:89] found id: ""
	I1202 22:33:01.947922  546345 logs.go:282] 0 containers: []
	W1202 22:33:01.947930  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:01.947937  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:01.947996  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:01.977128  546345 cri.go:89] found id: ""
	I1202 22:33:01.977153  546345 logs.go:282] 0 containers: []
	W1202 22:33:01.977161  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:01.977167  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:01.977226  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:02.004541  546345 cri.go:89] found id: ""
	I1202 22:33:02.004569  546345 logs.go:282] 0 containers: []
	W1202 22:33:02.004578  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:02.004586  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:02.004660  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:02.034163  546345 cri.go:89] found id: ""
	I1202 22:33:02.034189  546345 logs.go:282] 0 containers: []
	W1202 22:33:02.034199  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:02.034206  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:02.034302  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:02.103583  546345 cri.go:89] found id: ""
	I1202 22:33:02.103619  546345 logs.go:282] 0 containers: []
	W1202 22:33:02.103628  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:02.103651  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:02.103732  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:02.141546  546345 cri.go:89] found id: ""
	I1202 22:33:02.141581  546345 logs.go:282] 0 containers: []
	W1202 22:33:02.141590  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:02.141597  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:02.141672  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:02.166780  546345 cri.go:89] found id: ""
	I1202 22:33:02.166805  546345 logs.go:282] 0 containers: []
	W1202 22:33:02.166815  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:02.166824  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:02.166835  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:02.191150  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:02.191186  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:02.222079  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:02.222108  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:02.279420  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:02.279453  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:02.295466  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:02.295494  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:02.371035  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:02.361373   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.362484   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.364780   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.365388   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.366360   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
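
The block above is one round of minikube's wait loop: roughly every three seconds it pgreps for an apiserver process, enumerates a fixed component list through crictl, and re-gathers the kubelet, dmesg, containerd, and describe-nodes output. A rough bash sketch of that polling pattern, as an illustration of what the log shows rather than minikube's actual Go implementation:

    # Sketch of the visible polling pattern, run inside the node; not minikube source.
    components="kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard"
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      for c in $components; do
        # an empty result here is what the log prints as: found id: ""
        sudo crictl ps -a --quiet --name="$c"
      done
      sleep 3
    done
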
	I1202 22:33:04.872723  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:04.882988  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:04.883064  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:04.906907  546345 cri.go:89] found id: ""
	I1202 22:33:04.906931  546345 logs.go:282] 0 containers: []
	W1202 22:33:04.906940  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:04.906947  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:04.907006  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:04.931077  546345 cri.go:89] found id: ""
	I1202 22:33:04.931102  546345 logs.go:282] 0 containers: []
	W1202 22:33:04.931111  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:04.931119  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:04.931176  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:04.954232  546345 cri.go:89] found id: ""
	I1202 22:33:04.954258  546345 logs.go:282] 0 containers: []
	W1202 22:33:04.954266  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:04.954273  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:04.954332  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:04.978316  546345 cri.go:89] found id: ""
	I1202 22:33:04.978339  546345 logs.go:282] 0 containers: []
	W1202 22:33:04.978347  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:04.978354  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:04.978412  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:05.008227  546345 cri.go:89] found id: ""
	I1202 22:33:05.008253  546345 logs.go:282] 0 containers: []
	W1202 22:33:05.008261  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:05.008269  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:05.008401  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:05.037911  546345 cri.go:89] found id: ""
	I1202 22:33:05.037948  546345 logs.go:282] 0 containers: []
	W1202 22:33:05.037957  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:05.037964  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:05.038041  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:05.115835  546345 cri.go:89] found id: ""
	I1202 22:33:05.115860  546345 logs.go:282] 0 containers: []
	W1202 22:33:05.115869  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:05.115876  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:05.115944  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:05.142576  546345 cri.go:89] found id: ""
	I1202 22:33:05.142599  546345 logs.go:282] 0 containers: []
	W1202 22:33:05.142608  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:05.142617  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:05.142628  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:05.172774  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:05.172802  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:05.229451  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:05.229486  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:05.245158  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:05.245184  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:05.308964  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:05.301260   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.302075   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.303718   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.304189   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.305899   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:05.308985  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:05.309000  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:07.834473  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:07.845693  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:07.845780  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:07.870140  546345 cri.go:89] found id: ""
	I1202 22:33:07.870162  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.870171  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:07.870178  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:07.870238  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:07.894539  546345 cri.go:89] found id: ""
	I1202 22:33:07.894562  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.894570  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:07.894583  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:07.894640  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:07.918644  546345 cri.go:89] found id: ""
	I1202 22:33:07.918672  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.918681  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:07.918688  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:07.918751  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:07.942273  546345 cri.go:89] found id: ""
	I1202 22:33:07.942296  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.942304  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:07.942310  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:07.942367  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:07.965678  546345 cri.go:89] found id: ""
	I1202 22:33:07.965703  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.965712  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:07.965718  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:07.965775  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:07.989455  546345 cri.go:89] found id: ""
	I1202 22:33:07.989480  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.989489  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:07.989496  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:07.989556  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:08.015583  546345 cri.go:89] found id: ""
	I1202 22:33:08.015608  546345 logs.go:282] 0 containers: []
	W1202 22:33:08.015617  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:08.015624  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:08.015686  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:08.068697  546345 cri.go:89] found id: ""
	I1202 22:33:08.068724  546345 logs.go:282] 0 containers: []
	W1202 22:33:08.068734  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:08.068745  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:08.068768  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:08.112700  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:08.112750  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:08.148124  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:08.148159  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:08.208343  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:08.208384  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:08.224299  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:08.224331  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:08.287847  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:08.279728   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.280541   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.282177   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.282779   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.284345   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:10.788102  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:10.798373  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:10.798493  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:10.826690  546345 cri.go:89] found id: ""
	I1202 22:33:10.826715  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.826724  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:10.826731  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:10.826791  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:10.857739  546345 cri.go:89] found id: ""
	I1202 22:33:10.857765  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.857773  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:10.857780  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:10.857841  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:10.886900  546345 cri.go:89] found id: ""
	I1202 22:33:10.886926  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.886935  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:10.886942  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:10.887001  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:10.915788  546345 cri.go:89] found id: ""
	I1202 22:33:10.915811  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.915820  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:10.915826  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:10.915883  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:10.940846  546345 cri.go:89] found id: ""
	I1202 22:33:10.940869  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.940877  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:10.940883  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:10.940942  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:10.969358  546345 cri.go:89] found id: ""
	I1202 22:33:10.969380  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.969389  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:10.969396  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:10.969452  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:10.994365  546345 cri.go:89] found id: ""
	I1202 22:33:10.994389  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.994398  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:10.994405  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:10.994488  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:11.021354  546345 cri.go:89] found id: ""
	I1202 22:33:11.021376  546345 logs.go:282] 0 containers: []
	W1202 22:33:11.021387  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:11.021396  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:11.021406  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:11.096880  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:11.096922  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:11.115249  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:11.115286  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:11.192270  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:11.184836   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.185321   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.186779   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.187091   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.188512   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:11.192290  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:11.192305  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:11.216801  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:11.216838  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:13.747802  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:13.758663  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:13.758739  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:13.784138  546345 cri.go:89] found id: ""
	I1202 22:33:13.784160  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.784169  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:13.784175  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:13.784242  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:13.810746  546345 cri.go:89] found id: ""
	I1202 22:33:13.810768  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.810777  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:13.810783  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:13.810841  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:13.834531  546345 cri.go:89] found id: ""
	I1202 22:33:13.834563  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.834571  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:13.834578  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:13.834644  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:13.858698  546345 cri.go:89] found id: ""
	I1202 22:33:13.858721  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.858729  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:13.858736  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:13.858798  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:13.882726  546345 cri.go:89] found id: ""
	I1202 22:33:13.882749  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.882757  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:13.882764  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:13.882822  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:13.908263  546345 cri.go:89] found id: ""
	I1202 22:33:13.908287  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.908296  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:13.908302  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:13.908359  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:13.933266  546345 cri.go:89] found id: ""
	I1202 22:33:13.933290  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.933298  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:13.933304  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:13.933361  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:13.957668  546345 cri.go:89] found id: ""
	I1202 22:33:13.957738  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.957753  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:13.957764  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:13.957776  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:13.983158  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:13.983193  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:14.013404  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:14.013434  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:14.076941  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:14.076982  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:14.122673  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:14.122701  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:14.186208  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:14.178781   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.179568   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.180719   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.181367   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.183063   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:16.686471  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:16.697167  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:16.697255  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:16.721334  546345 cri.go:89] found id: ""
	I1202 22:33:16.721358  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.721367  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:16.721374  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:16.721439  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:16.744849  546345 cri.go:89] found id: ""
	I1202 22:33:16.744875  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.744887  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:16.744893  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:16.744950  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:16.768289  546345 cri.go:89] found id: ""
	I1202 22:33:16.768315  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.768324  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:16.768330  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:16.768390  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:16.793721  546345 cri.go:89] found id: ""
	I1202 22:33:16.793745  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.793754  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:16.793761  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:16.793822  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:16.819397  546345 cri.go:89] found id: ""
	I1202 22:33:16.819419  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.819427  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:16.819434  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:16.819493  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:16.847655  546345 cri.go:89] found id: ""
	I1202 22:33:16.847682  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.847691  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:16.847699  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:16.847779  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:16.872502  546345 cri.go:89] found id: ""
	I1202 22:33:16.872527  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.872535  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:16.872542  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:16.872605  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:16.904922  546345 cri.go:89] found id: ""
	I1202 22:33:16.904953  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.904968  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:16.904978  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:16.904990  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:16.929494  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:16.929529  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:16.960812  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:16.960840  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:17.015332  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:17.015369  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:17.031163  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:17.031192  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:17.146404  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:17.138582   11076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:17.139239   11076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:17.140866   11076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:17.141549   11076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:17.143289   11076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:19.646668  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:19.656904  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:19.656972  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:19.681366  546345 cri.go:89] found id: ""
	I1202 22:33:19.681390  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.681397  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:19.681404  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:19.681462  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:19.705682  546345 cri.go:89] found id: ""
	I1202 22:33:19.705711  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.705720  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:19.705726  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:19.705782  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:19.728889  546345 cri.go:89] found id: ""
	I1202 22:33:19.728913  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.728921  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:19.728928  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:19.728986  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:19.753177  546345 cri.go:89] found id: ""
	I1202 22:33:19.753200  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.753209  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:19.753215  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:19.753275  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:19.777064  546345 cri.go:89] found id: ""
	I1202 22:33:19.777087  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.777095  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:19.777101  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:19.777165  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:19.804440  546345 cri.go:89] found id: ""
	I1202 22:33:19.804462  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.804479  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:19.804487  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:19.804544  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:19.831370  546345 cri.go:89] found id: ""
	I1202 22:33:19.831395  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.831403  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:19.831409  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:19.831470  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:19.854457  546345 cri.go:89] found id: ""
	I1202 22:33:19.854481  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.854489  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:19.854498  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:19.854512  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:19.912020  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:19.912055  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:19.927521  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:19.927549  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:19.988124  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:19.980291   11176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:19.980690   11176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:19.981977   11176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:19.982920   11176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:19.984635   11176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:19.988188  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:19.988211  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:20.013304  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:20.013341  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:22.562705  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:22.573519  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:22.573597  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:22.603465  546345 cri.go:89] found id: ""
	I1202 22:33:22.603541  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.603556  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:22.603564  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:22.603670  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:22.629949  546345 cri.go:89] found id: ""
	I1202 22:33:22.629976  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.629985  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:22.629991  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:22.630051  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:22.660760  546345 cri.go:89] found id: ""
	I1202 22:33:22.660785  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.660794  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:22.660801  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:22.660861  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:22.685501  546345 cri.go:89] found id: ""
	I1202 22:33:22.685531  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.685540  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:22.685555  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:22.685618  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:22.712679  546345 cri.go:89] found id: ""
	I1202 22:33:22.712714  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.712723  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:22.712730  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:22.712799  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:22.738275  546345 cri.go:89] found id: ""
	I1202 22:33:22.738301  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.738310  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:22.738317  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:22.738437  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:22.767652  546345 cri.go:89] found id: ""
	I1202 22:33:22.767677  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.767686  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:22.767694  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:22.767756  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:22.793810  546345 cri.go:89] found id: ""
	I1202 22:33:22.793836  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.793845  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:22.793854  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:22.793866  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:22.856577  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:22.856615  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:22.872185  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:22.872221  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:22.937005  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:22.929061   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:22.930043   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:22.931595   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:22.932111   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:22.933624   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:22.937039  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:22.937052  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:22.961706  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:22.961743  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
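Each probe above logs `found id: ""` and then `0 containers: []`. A plausible reading, sketched below, is that splitting crictl's empty output on newlines yields a single empty string, which gets logged before empty entries are filtered out; the actual cri.go may differ:

package main

import (
	"fmt"
	"strings"
)

func main() {
	raw := "" // what `crictl ps -a --quiet --name=etcd` printed: nothing
	var ids []string
	for _, line := range strings.Split(raw, "\n") {
		fmt.Printf("found id: %q\n", line) // logs `found id: ""` for the empty line
		if l := strings.TrimSpace(line); l != "" {
			ids = append(ids, l)
		}
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids) // 0 containers: []
}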
	I1202 22:33:25.491815  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:25.502275  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:25.502392  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:25.526647  546345 cri.go:89] found id: ""
	I1202 22:33:25.526680  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.526688  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:25.526695  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:25.526767  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:25.554949  546345 cri.go:89] found id: ""
	I1202 22:33:25.554970  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.554980  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:25.554986  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:25.555043  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:25.578929  546345 cri.go:89] found id: ""
	I1202 22:33:25.578953  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.578962  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:25.578968  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:25.579044  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:25.608022  546345 cri.go:89] found id: ""
	I1202 22:33:25.608056  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.608065  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:25.608088  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:25.608169  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:25.636085  546345 cri.go:89] found id: ""
	I1202 22:33:25.636120  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.636130  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:25.636153  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:25.636235  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:25.666823  546345 cri.go:89] found id: ""
	I1202 22:33:25.666856  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.666865  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:25.666873  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:25.666942  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:25.690601  546345 cri.go:89] found id: ""
	I1202 22:33:25.690635  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.690645  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:25.690652  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:25.690723  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:25.719343  546345 cri.go:89] found id: ""
	I1202 22:33:25.719379  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.719388  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:25.719396  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:25.719408  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:25.743724  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:25.743768  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:25.771761  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:25.771786  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:25.828678  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:25.828713  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:25.844300  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:25.844332  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:25.908308  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:25.900092   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:25.900613   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:25.902283   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:25.902859   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:25.904505   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
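Every `describe nodes` attempt in this run dies the same way: kubectl cannot reach https://localhost:8443 from inside the node, so nothing is listening on the apiserver port. A quick hypothetical TCP probe (not part of the test) that reproduces the same symptom:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the log: dial tcp [::1]:8443: connect: connection refused.
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}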
	I1202 22:33:28.409045  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:28.420392  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:28.420486  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:28.451663  546345 cri.go:89] found id: ""
	I1202 22:33:28.451687  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.451696  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:28.451704  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:28.451770  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:28.480763  546345 cri.go:89] found id: ""
	I1202 22:33:28.480788  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.480797  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:28.480804  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:28.480888  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:28.505757  546345 cri.go:89] found id: ""
	I1202 22:33:28.505781  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.505789  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:28.505796  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:28.505882  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:28.530092  546345 cri.go:89] found id: ""
	I1202 22:33:28.530124  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.530134  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:28.530141  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:28.530202  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:28.555441  546345 cri.go:89] found id: ""
	I1202 22:33:28.555468  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.555477  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:28.555484  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:28.555542  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:28.588393  546345 cri.go:89] found id: ""
	I1202 22:33:28.588414  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.588422  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:28.588429  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:28.588498  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:28.615564  546345 cri.go:89] found id: ""
	I1202 22:33:28.615586  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.615595  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:28.615602  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:28.615663  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:28.640294  546345 cri.go:89] found id: ""
	I1202 22:33:28.640316  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.640324  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:28.640333  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:28.640344  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:28.670446  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:28.670473  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:28.731540  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:28.731583  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:28.747338  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:28.747365  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:28.807964  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:28.800513   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:28.801318   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:28.802857   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:28.803139   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:28.804600   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:28.807987  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:28.808001  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
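The gather phase that repeats in each cycle is a small, fixed set of shell commands: journalctl for kubelet and containerd, a filtered dmesg, and a container-status listing. A sketch of that fan-out, with the command strings taken verbatim from the log and the wrapper code assumed:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	gatherers := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range gatherers {
		fmt.Printf("Gathering logs for %s ...\n", name)
		// In the real flow this runs over SSH inside the node and the output
		// is folded into the report; here it just runs locally.
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("  %s failed: %v\n", name, err)
		}
		_ = out
	}
}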
	I1202 22:33:31.332523  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:31.349889  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:31.349961  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:31.381168  546345 cri.go:89] found id: ""
	I1202 22:33:31.381196  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.381204  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:31.381211  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:31.381274  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:31.408915  546345 cri.go:89] found id: ""
	I1202 22:33:31.408947  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.408956  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:31.408963  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:31.409025  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:31.433408  546345 cri.go:89] found id: ""
	I1202 22:33:31.433433  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.433441  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:31.433448  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:31.433506  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:31.457935  546345 cri.go:89] found id: ""
	I1202 22:33:31.457968  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.457976  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:31.457983  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:31.458053  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:31.481621  546345 cri.go:89] found id: ""
	I1202 22:33:31.481694  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.481704  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:31.481711  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:31.481781  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:31.505764  546345 cri.go:89] found id: ""
	I1202 22:33:31.505789  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.505799  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:31.505805  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:31.505864  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:31.530522  546345 cri.go:89] found id: ""
	I1202 22:33:31.530557  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.530565  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:31.530572  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:31.530639  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:31.558641  546345 cri.go:89] found id: ""
	I1202 22:33:31.558706  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.558720  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:31.558731  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:31.558747  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:31.614675  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:31.614707  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:31.630252  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:31.630279  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:31.695335  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:31.687643   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:31.688201   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:31.689779   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:31.690376   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:31.692067   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:31.695359  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:31.695372  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:31.719979  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:31.720013  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
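The `describe nodes` gatherer shells out to the node-local kubectl binary with the node's kubeconfig, which is why the failure surfaces as kubectl stderr plus exit status 1 rather than as a Go error. A sketch using the exact paths from the log; the surrounding wrapper is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes " +
		"--kubeconfig=/var/lib/minikube/kubeconfig"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		// With no apiserver on localhost:8443 this exits with status 1 and the
		// "connection refused" stderr seen throughout the report.
		fmt.Printf("failed describe nodes: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}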
	I1202 22:33:34.252356  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:34.264856  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:34.264924  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:34.303387  546345 cri.go:89] found id: ""
	I1202 22:33:34.303422  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.303437  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:34.303445  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:34.303502  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:34.377615  546345 cri.go:89] found id: ""
	I1202 22:33:34.377643  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.377665  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:34.377673  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:34.377750  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:34.409336  546345 cri.go:89] found id: ""
	I1202 22:33:34.409359  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.409367  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:34.409374  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:34.409433  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:34.434153  546345 cri.go:89] found id: ""
	I1202 22:33:34.434175  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.434184  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:34.434190  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:34.434250  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:34.459524  546345 cri.go:89] found id: ""
	I1202 22:33:34.459549  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.459558  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:34.459565  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:34.459622  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:34.487835  546345 cri.go:89] found id: ""
	I1202 22:33:34.487862  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.487871  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:34.487878  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:34.487939  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:34.511616  546345 cri.go:89] found id: ""
	I1202 22:33:34.511638  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.511647  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:34.511654  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:34.511712  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:34.539284  546345 cri.go:89] found id: ""
	I1202 22:33:34.539307  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.539315  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:34.539324  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:34.539335  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:34.594370  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:34.594404  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:34.610176  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:34.610203  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:34.674945  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:34.667881   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:34.668374   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:34.669938   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:34.670382   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:34.671879   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:34.674968  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:34.674980  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:34.699820  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:34.699855  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
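The timestamps show the whole cycle repeating roughly every three seconds (22:33:22, :25, :28, :31, :34, ...). A minimal sketch of such a poll loop, assuming a fixed interval and the pgrep liveness check from the log; minikube's real wait logic is more involved:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ticker := time.NewTicker(3 * time.Second)
	defer ticker.Stop()
	for range ticker.C {
		// Exit status 0 means a kube-apiserver process matching the pattern exists.
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		fmt.Println("kube-apiserver not running yet; collecting diagnostics and retrying")
	}
}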
	I1202 22:33:37.235245  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:37.245512  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:37.245580  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:37.270720  546345 cri.go:89] found id: ""
	I1202 22:33:37.270743  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.270751  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:37.270757  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:37.270818  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:37.317208  546345 cri.go:89] found id: ""
	I1202 22:33:37.317236  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.317244  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:37.317250  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:37.317357  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:37.381241  546345 cri.go:89] found id: ""
	I1202 22:33:37.381304  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.381319  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:37.381331  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:37.381391  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:37.406579  546345 cri.go:89] found id: ""
	I1202 22:33:37.406604  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.406613  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:37.406620  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:37.406676  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:37.431035  546345 cri.go:89] found id: ""
	I1202 22:33:37.431061  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.431071  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:37.431078  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:37.431170  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:37.455450  546345 cri.go:89] found id: ""
	I1202 22:33:37.455476  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.455485  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:37.455491  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:37.455549  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:37.479696  546345 cri.go:89] found id: ""
	I1202 22:33:37.479763  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.479784  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:37.479791  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:37.479864  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:37.504424  546345 cri.go:89] found id: ""
	I1202 22:33:37.504449  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.504465  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:37.504475  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:37.504486  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:37.562929  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:37.562965  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:37.578720  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:37.578749  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:37.643738  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:37.635957   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.636680   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.638363   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.638894   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.640533   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:37.643758  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:37.643770  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:37.669355  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:37.669389  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:40.197629  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:40.209725  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:40.209798  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:40.235226  546345 cri.go:89] found id: ""
	I1202 22:33:40.235249  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.235258  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:40.235265  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:40.235323  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:40.264913  546345 cri.go:89] found id: ""
	I1202 22:33:40.264938  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.264948  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:40.264955  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:40.265014  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:40.292266  546345 cri.go:89] found id: ""
	I1202 22:33:40.292293  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.292302  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:40.292309  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:40.292366  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:40.328677  546345 cri.go:89] found id: ""
	I1202 22:33:40.328703  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.328712  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:40.328718  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:40.328779  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:40.372520  546345 cri.go:89] found id: ""
	I1202 22:33:40.372553  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.372562  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:40.372570  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:40.372637  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:40.401860  546345 cri.go:89] found id: ""
	I1202 22:33:40.401896  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.401906  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:40.401913  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:40.401981  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:40.426706  546345 cri.go:89] found id: ""
	I1202 22:33:40.426774  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.426790  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:40.426797  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:40.426871  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:40.450845  546345 cri.go:89] found id: ""
	I1202 22:33:40.450873  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.450882  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:40.450892  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:40.450921  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:40.466330  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:40.466359  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:40.530421  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:40.522152   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.522737   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.524454   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.524953   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.526601   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:40.530440  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:40.530471  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:40.557935  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:40.557971  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:40.589359  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:40.589413  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:43.149757  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:43.160459  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:43.160531  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:43.185860  546345 cri.go:89] found id: ""
	I1202 22:33:43.185885  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.185893  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:43.185900  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:43.185959  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:43.213745  546345 cri.go:89] found id: ""
	I1202 22:33:43.213771  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.213782  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:43.213788  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:43.213845  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:43.238763  546345 cri.go:89] found id: ""
	I1202 22:33:43.238788  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.238796  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:43.238805  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:43.238865  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:43.263259  546345 cri.go:89] found id: ""
	I1202 22:33:43.263285  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.263294  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:43.263301  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:43.263362  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:43.287780  546345 cri.go:89] found id: ""
	I1202 22:33:43.287804  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.287812  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:43.287818  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:43.287901  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:43.333797  546345 cri.go:89] found id: ""
	I1202 22:33:43.333819  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.333827  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:43.333833  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:43.333891  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:43.379712  546345 cri.go:89] found id: ""
	I1202 22:33:43.379734  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.379743  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:43.379749  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:43.379808  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:43.415160  546345 cri.go:89] found id: ""
	I1202 22:33:43.415240  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.415264  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:43.415282  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:43.415306  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:43.442448  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:43.442475  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:43.497169  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:43.497207  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:43.513334  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:43.513370  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:43.577650  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:43.569606   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.570071   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.571853   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.572346   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.574036   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:43.577691  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:43.577704  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
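The loop never succeeds in this run; the enclosing StartWithProxy test gives up after about 507 seconds. A sketch of bounding the same wait with a context deadline; the timeout value and structure here are assumptions for illustration:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		// Same liveness check as above, but cancelled when the deadline passes.
		if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("apiserver up")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for kube-apiserver:", ctx.Err())
			return
		case <-time.After(3 * time.Second):
		}
	}
}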
	I1202 22:33:46.104276  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:46.114696  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:46.114770  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:46.143775  546345 cri.go:89] found id: ""
	I1202 22:33:46.143798  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.143806  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:46.143813  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:46.143872  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:46.168484  546345 cri.go:89] found id: ""
	I1202 22:33:46.168508  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.168517  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:46.168527  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:46.168585  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:46.195213  546345 cri.go:89] found id: ""
	I1202 22:33:46.195236  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.195244  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:46.195251  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:46.195316  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:46.218803  546345 cri.go:89] found id: ""
	I1202 22:33:46.218825  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.218833  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:46.218840  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:46.218902  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:46.242627  546345 cri.go:89] found id: ""
	I1202 22:33:46.242649  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.242657  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:46.242664  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:46.242735  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:46.268270  546345 cri.go:89] found id: ""
	I1202 22:33:46.268299  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.268314  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:46.268322  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:46.268398  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:46.303449  546345 cri.go:89] found id: ""
	I1202 22:33:46.303476  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.303484  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:46.303491  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:46.303547  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:46.355851  546345 cri.go:89] found id: ""
	I1202 22:33:46.355877  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.355886  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:46.355895  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:46.355906  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:46.372396  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:46.372426  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:46.448683  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:46.440678   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.441128   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.442893   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.443519   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.445111   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:46.448707  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:46.448721  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:46.472236  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:46.472269  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:46.501830  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:46.501857  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:49.060676  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:49.071150  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:49.071224  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:49.095927  546345 cri.go:89] found id: ""
	I1202 22:33:49.095949  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.095963  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:49.095970  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:49.096027  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:49.121814  546345 cri.go:89] found id: ""
	I1202 22:33:49.121837  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.121846  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:49.121853  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:49.121911  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:49.150554  546345 cri.go:89] found id: ""
	I1202 22:33:49.150582  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.150590  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:49.150596  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:49.150660  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:49.174636  546345 cri.go:89] found id: ""
	I1202 22:33:49.174660  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.174668  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:49.174675  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:49.174757  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:49.198993  546345 cri.go:89] found id: ""
	I1202 22:33:49.199019  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.199028  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:49.199035  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:49.199122  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:49.237206  546345 cri.go:89] found id: ""
	I1202 22:33:49.237280  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.237304  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:49.237327  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:49.237412  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:49.262326  546345 cri.go:89] found id: ""
	I1202 22:33:49.262395  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.262418  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:49.262437  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:49.262508  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:49.287127  546345 cri.go:89] found id: ""
	I1202 22:33:49.287192  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.287215  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:49.287239  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:49.287269  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:49.365279  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:49.365438  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:49.383138  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:49.383164  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:49.454034  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:49.446536   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.447192   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.448807   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.449446   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.450982   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:49.454054  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:49.454066  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:49.478949  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:49.478982  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:52.007120  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:52.018354  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:52.018431  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:52.048418  546345 cri.go:89] found id: ""
	I1202 22:33:52.048502  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.048527  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:52.048554  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:52.048670  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:52.075756  546345 cri.go:89] found id: ""
	I1202 22:33:52.075795  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.075804  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:52.075811  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:52.075875  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:52.102101  546345 cri.go:89] found id: ""
	I1202 22:33:52.102128  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.102138  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:52.102145  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:52.102213  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:52.127350  546345 cri.go:89] found id: ""
	I1202 22:33:52.127375  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.127390  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:52.127397  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:52.127461  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:52.152298  546345 cri.go:89] found id: ""
	I1202 22:33:52.152325  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.152334  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:52.152340  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:52.152398  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:52.176927  546345 cri.go:89] found id: ""
	I1202 22:33:52.176952  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.176960  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:52.176966  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:52.177023  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:52.203976  546345 cri.go:89] found id: ""
	I1202 22:33:52.204003  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.204012  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:52.204018  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:52.204077  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:52.229381  546345 cri.go:89] found id: ""
	I1202 22:33:52.229408  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.229416  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:52.229425  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:52.229443  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:52.292540  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:52.283085   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.283828   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.285448   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.285967   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.287627   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:52.292561  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:52.292574  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:52.324946  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:52.325102  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:52.369542  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:52.369568  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:52.436122  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:52.436159  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:54.953633  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:54.963990  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:54.964062  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:54.991840  546345 cri.go:89] found id: ""
	I1202 22:33:54.991865  546345 logs.go:282] 0 containers: []
	W1202 22:33:54.991873  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:54.991880  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:54.991937  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:55.024217  546345 cri.go:89] found id: ""
	I1202 22:33:55.024241  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.024250  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:55.024258  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:55.024320  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:55.048985  546345 cri.go:89] found id: ""
	I1202 22:33:55.049007  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.049015  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:55.049021  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:55.049086  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:55.073787  546345 cri.go:89] found id: ""
	I1202 22:33:55.073809  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.073818  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:55.073825  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:55.073887  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:55.097827  546345 cri.go:89] found id: ""
	I1202 22:33:55.097849  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.097857  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:55.097864  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:55.097929  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:55.127096  546345 cri.go:89] found id: ""
	I1202 22:33:55.127119  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.127127  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:55.127135  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:55.127247  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:55.155895  546345 cri.go:89] found id: ""
	I1202 22:33:55.155920  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.155929  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:55.155936  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:55.155998  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:55.184917  546345 cri.go:89] found id: ""
	I1202 22:33:55.184943  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.184951  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:55.184960  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:55.184973  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:55.245409  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:55.238197   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.238600   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.240244   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.240779   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.242395   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:55.245430  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:55.245443  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:55.269272  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:55.269303  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:55.324186  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:55.324256  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:55.407948  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:55.408021  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:57.927547  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:57.938134  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:57.938208  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:57.966983  546345 cri.go:89] found id: ""
	I1202 22:33:57.967016  546345 logs.go:282] 0 containers: []
	W1202 22:33:57.967025  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:57.967031  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:57.967090  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:57.990911  546345 cri.go:89] found id: ""
	I1202 22:33:57.990934  546345 logs.go:282] 0 containers: []
	W1202 22:33:57.990942  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:57.990949  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:57.991006  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:58.027051  546345 cri.go:89] found id: ""
	I1202 22:33:58.027076  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.027085  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:58.027091  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:58.027170  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:58.052767  546345 cri.go:89] found id: ""
	I1202 22:33:58.052791  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.052801  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:58.052808  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:58.052866  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:58.077589  546345 cri.go:89] found id: ""
	I1202 22:33:58.077616  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.077626  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:58.077634  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:58.077736  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:58.102352  546345 cri.go:89] found id: ""
	I1202 22:33:58.102377  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.102385  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:58.102394  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:58.102453  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:58.127151  546345 cri.go:89] found id: ""
	I1202 22:33:58.127174  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.127183  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:58.127203  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:58.127264  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:58.153068  546345 cri.go:89] found id: ""
	I1202 22:33:58.153097  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.153106  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:58.153116  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:58.153128  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:58.207341  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:58.207375  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:58.223908  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:58.223993  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:58.303303  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:58.282435   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.282890   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.284669   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.285085   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.286613   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:58.303374  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:58.303401  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:58.339284  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:58.339358  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:00.884684  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:00.894955  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:00.895043  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:00.919607  546345 cri.go:89] found id: ""
	I1202 22:34:00.919638  546345 logs.go:282] 0 containers: []
	W1202 22:34:00.919648  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:00.919655  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:00.919714  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:00.943845  546345 cri.go:89] found id: ""
	I1202 22:34:00.943869  546345 logs.go:282] 0 containers: []
	W1202 22:34:00.943877  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:00.943883  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:00.943942  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:00.969291  546345 cri.go:89] found id: ""
	I1202 22:34:00.969316  546345 logs.go:282] 0 containers: []
	W1202 22:34:00.969325  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:00.969332  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:00.969387  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:00.998170  546345 cri.go:89] found id: ""
	I1202 22:34:00.998194  546345 logs.go:282] 0 containers: []
	W1202 22:34:00.998203  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:00.998210  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:00.998267  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:01.028082  546345 cri.go:89] found id: ""
	I1202 22:34:01.028108  546345 logs.go:282] 0 containers: []
	W1202 22:34:01.028118  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:01.028125  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:01.028182  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:01.052163  546345 cri.go:89] found id: ""
	I1202 22:34:01.052190  546345 logs.go:282] 0 containers: []
	W1202 22:34:01.052198  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:01.052204  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:01.052261  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:01.079605  546345 cri.go:89] found id: ""
	I1202 22:34:01.079638  546345 logs.go:282] 0 containers: []
	W1202 22:34:01.079648  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:01.079655  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:01.079727  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:01.104672  546345 cri.go:89] found id: ""
	I1202 22:34:01.104697  546345 logs.go:282] 0 containers: []
	W1202 22:34:01.104705  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:01.104714  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:01.104727  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:01.168637  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:01.168689  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:01.186088  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:01.186120  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:01.254373  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:01.244820   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.245479   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.247310   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.247977   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.250513   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:34:01.254405  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:01.254421  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:01.279534  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:01.279570  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:03.844056  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:03.854485  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:03.854559  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:03.883518  546345 cri.go:89] found id: ""
	I1202 22:34:03.883539  546345 logs.go:282] 0 containers: []
	W1202 22:34:03.883547  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:03.883555  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:03.883616  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:03.907609  546345 cri.go:89] found id: ""
	I1202 22:34:03.907634  546345 logs.go:282] 0 containers: []
	W1202 22:34:03.907643  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:03.907650  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:03.907708  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:03.931661  546345 cri.go:89] found id: ""
	I1202 22:34:03.931686  546345 logs.go:282] 0 containers: []
	W1202 22:34:03.931694  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:03.931701  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:03.931762  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:03.956212  546345 cri.go:89] found id: ""
	I1202 22:34:03.956236  546345 logs.go:282] 0 containers: []
	W1202 22:34:03.956245  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:03.956252  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:03.956310  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:03.982858  546345 cri.go:89] found id: ""
	I1202 22:34:03.982882  546345 logs.go:282] 0 containers: []
	W1202 22:34:03.982890  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:03.982899  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:03.982955  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:04.008609  546345 cri.go:89] found id: ""
	I1202 22:34:04.008637  546345 logs.go:282] 0 containers: []
	W1202 22:34:04.008646  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:04.008654  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:04.008718  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:04.034395  546345 cri.go:89] found id: ""
	I1202 22:34:04.034426  546345 logs.go:282] 0 containers: []
	W1202 22:34:04.034436  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:04.034443  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:04.034503  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:04.059450  546345 cri.go:89] found id: ""
	I1202 22:34:04.059474  546345 logs.go:282] 0 containers: []
	W1202 22:34:04.059482  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:04.059492  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:04.059503  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:04.116204  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:04.116237  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:04.131753  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:04.131779  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:04.195398  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:04.187783   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.188327   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.189976   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.190535   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.192070   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:34:04.195417  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:04.195431  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:04.220265  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:04.220302  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:06.748017  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:06.758416  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:06.758487  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:06.786850  546345 cri.go:89] found id: ""
	I1202 22:34:06.786877  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.786886  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:06.786893  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:06.786958  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:06.811248  546345 cri.go:89] found id: ""
	I1202 22:34:06.811274  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.811283  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:06.811290  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:06.811352  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:06.835885  546345 cri.go:89] found id: ""
	I1202 22:34:06.835911  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.835920  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:06.835927  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:06.835986  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:06.861031  546345 cri.go:89] found id: ""
	I1202 22:34:06.861057  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.861066  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:06.861076  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:06.861137  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:06.885492  546345 cri.go:89] found id: ""
	I1202 22:34:06.885518  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.885526  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:06.885533  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:06.885621  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:06.911207  546345 cri.go:89] found id: ""
	I1202 22:34:06.911233  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.911242  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:06.911249  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:06.911307  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:06.936761  546345 cri.go:89] found id: ""
	I1202 22:34:06.936786  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.936794  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:06.936801  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:06.936858  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:06.961200  546345 cri.go:89] found id: ""
	I1202 22:34:06.961225  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.961233  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:06.961242  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:06.961253  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:07.017396  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:07.017432  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:07.033140  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:07.033220  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:07.098724  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:07.091082   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.091775   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.093263   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.093721   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.095156   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:34:07.091082   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.091775   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.093263   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.093721   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.095156   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:07.098749  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:07.098764  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:07.123278  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:07.123313  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:09.654822  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:09.666550  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:09.666631  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:09.696479  546345 cri.go:89] found id: ""
	I1202 22:34:09.696501  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.696510  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:09.696516  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:09.696573  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:09.720695  546345 cri.go:89] found id: ""
	I1202 22:34:09.720717  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.720725  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:09.720732  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:09.720789  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:09.743340  546345 cri.go:89] found id: ""
	I1202 22:34:09.743366  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.743374  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:09.743381  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:09.743441  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:09.771827  546345 cri.go:89] found id: ""
	I1202 22:34:09.771851  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.771859  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:09.771866  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:09.771942  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:09.800440  546345 cri.go:89] found id: ""
	I1202 22:34:09.800511  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.800522  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:09.800529  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:09.800599  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:09.827898  546345 cri.go:89] found id: ""
	I1202 22:34:09.827933  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.827942  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:09.827949  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:09.828053  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:09.851874  546345 cri.go:89] found id: ""
	I1202 22:34:09.851909  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.851918  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:09.851925  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:09.852023  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:09.876063  546345 cri.go:89] found id: ""
	I1202 22:34:09.876098  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.876106  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:09.876136  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:09.876157  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:09.931102  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:09.931140  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:09.947006  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:09.947033  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:10.016167  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:10.007437   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.008283   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.010218   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.010846   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.012661   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:34:10.007437   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.008283   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.010218   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.010846   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.012661   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:10.016189  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:10.016202  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:10.042713  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:10.042746  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:12.574841  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:12.602704  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:12.602776  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:12.630259  546345 cri.go:89] found id: ""
	I1202 22:34:12.630283  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.630291  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:12.630298  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:12.630356  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:12.653540  546345 cri.go:89] found id: ""
	I1202 22:34:12.653571  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.653580  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:12.653587  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:12.653726  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:12.678660  546345 cri.go:89] found id: ""
	I1202 22:34:12.678685  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.678694  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:12.678701  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:12.678761  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:12.702118  546345 cri.go:89] found id: ""
	I1202 22:34:12.702147  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.702155  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:12.702162  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:12.702262  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:12.729590  546345 cri.go:89] found id: ""
	I1202 22:34:12.729615  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.729624  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:12.729631  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:12.729713  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:12.755560  546345 cri.go:89] found id: ""
	I1202 22:34:12.755586  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.755594  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:12.755601  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:12.755656  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:12.788269  546345 cri.go:89] found id: ""
	I1202 22:34:12.788293  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.788302  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:12.788308  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:12.788366  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:12.812214  546345 cri.go:89] found id: ""
	I1202 22:34:12.812239  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.812248  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:12.812257  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:12.812268  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:12.841941  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:12.841966  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:12.896188  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:12.896219  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:12.911694  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:12.911721  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:12.975342  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:12.967919   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.968476   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.970099   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.970747   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.972209   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:34:12.967919   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.968476   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.970099   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.970747   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.972209   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:12.975377  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:12.975389  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:15.502887  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:15.513338  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:15.513418  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:15.536875  546345 cri.go:89] found id: ""
	I1202 22:34:15.536897  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.536905  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:15.536911  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:15.536970  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:15.573309  546345 cri.go:89] found id: ""
	I1202 22:34:15.573335  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.573360  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:15.573368  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:15.573433  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:15.623126  546345 cri.go:89] found id: ""
	I1202 22:34:15.623149  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.623157  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:15.623164  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:15.623221  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:15.657458  546345 cri.go:89] found id: ""
	I1202 22:34:15.657484  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.657493  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:15.657500  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:15.657568  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:15.681354  546345 cri.go:89] found id: ""
	I1202 22:34:15.681380  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.681389  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:15.681395  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:15.681456  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:15.705775  546345 cri.go:89] found id: ""
	I1202 22:34:15.705848  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.705874  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:15.705894  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:15.705971  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:15.731425  546345 cri.go:89] found id: ""
	I1202 22:34:15.731448  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.731457  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:15.731464  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:15.731521  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:15.755658  546345 cri.go:89] found id: ""
	I1202 22:34:15.755682  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.755690  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:15.755699  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:15.755711  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:15.811079  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:15.811113  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:15.827246  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:15.827272  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:15.889878  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:15.882005   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.882392   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.884118   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.884767   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.886280   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:34:15.882005   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.882392   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.884118   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.884767   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.886280   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:15.889899  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:15.889912  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:15.915317  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:15.915350  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:18.445059  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:18.458773  546345 out.go:203] 
	W1202 22:34:18.461733  546345 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1202 22:34:18.461774  546345 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1202 22:34:18.461784  546345 out.go:285] * Related issues:
	* Related issues:
	W1202 22:34:18.461797  546345 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1202 22:34:18.461818  546345 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1202 22:34:18.464650  546345 out.go:203] 

** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p newest-cni-250247 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 105
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-250247
helpers_test.go:243: (dbg) docker inspect newest-cni-250247:

-- stdout --
	[
	    {
	        "Id": "8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2",
	        "Created": "2025-12-02T22:17:45.695373395Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 546476,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T22:28:10.516417593Z",
	            "FinishedAt": "2025-12-02T22:28:08.91957983Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2/hosts",
	        "LogPath": "/var/lib/docker/containers/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2-json.log",
	        "Name": "/newest-cni-250247",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-250247:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-250247",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2",
	                "LowerDir": "/var/lib/docker/overlay2/ca046fece97404be279f28506c35572e6b66d2a0b57e70b756461317d0a04310-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ca046fece97404be279f28506c35572e6b66d2a0b57e70b756461317d0a04310/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ca046fece97404be279f28506c35572e6b66d2a0b57e70b756461317d0a04310/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ca046fece97404be279f28506c35572e6b66d2a0b57e70b756461317d0a04310/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-250247",
	                "Source": "/var/lib/docker/volumes/newest-cni-250247/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-250247",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-250247",
	                "name.minikube.sigs.k8s.io": "newest-cni-250247",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "19a1ca374f2ac15ceeb8732ad47e7e4e789db7b4dc20ead5353b14dfc8ce4376",
	            "SandboxKey": "/var/run/docker/netns/19a1ca374f2a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-250247": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:22:6b:2b:a3:2a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cfffc9981d9cab6ce5981c2e79bfb0dd15ae8455b64d0bfc795000bbbe273d91",
	                    "EndpointID": "6077ce03ce851ef49c2205e3affa2e3c9a93685b0b2e5a16a743470850763606",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-250247",
	                        "8d631b193c97"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-250247 -n newest-cni-250247
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-250247 -n newest-cni-250247: exit status 2 (356.155991ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-250247 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-250247 logs -n 25: (1.565492409s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ embed-certs-716386 image list --format=json                                                                                                                                                                                                                │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ pause   │ -p embed-certs-716386 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ unpause │ -p embed-certs-716386 --alsologtostderr -v=1                                                                                                                                                                                                               │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p embed-certs-716386                                                                                                                                                                                                                                      │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p embed-certs-716386                                                                                                                                                                                                                                      │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p disable-driver-mounts-122586                                                                                                                                                                                                                            │ disable-driver-mounts-122586 │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ start   │ -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-444714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ stop    │ -p default-k8s-diff-port-444714 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-444714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ start   │ -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:17 UTC │
	│ image   │ default-k8s-diff-port-444714 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ pause   │ -p default-k8s-diff-port-444714 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ unpause │ -p default-k8s-diff-port-444714 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ delete  │ -p default-k8s-diff-port-444714                                                                                                                                                                                                                            │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ delete  │ -p default-k8s-diff-port-444714                                                                                                                                                                                                                            │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ start   │ -p newest-cni-250247 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-904303 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:21 UTC │                     │
	│ stop    │ -p no-preload-904303 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:23 UTC │ 02 Dec 25 22:23 UTC │
	│ addons  │ enable dashboard -p no-preload-904303 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:23 UTC │ 02 Dec 25 22:23 UTC │
	│ start   │ -p no-preload-904303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:23 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-250247 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:26 UTC │                     │
	│ stop    │ -p newest-cni-250247 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:28 UTC │ 02 Dec 25 22:28 UTC │
	│ addons  │ enable dashboard -p newest-cni-250247 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:28 UTC │ 02 Dec 25 22:28 UTC │
	│ start   │ -p newest-cni-250247 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:28 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 22:28:09
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 22:28:09.982860  546345 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:28:09.982990  546345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:28:09.983001  546345 out.go:374] Setting ErrFile to fd 2...
	I1202 22:28:09.983006  546345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:28:09.983258  546345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 22:28:09.983629  546345 out.go:368] Setting JSON to false
	I1202 22:28:09.984474  546345 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15028,"bootTime":1764699462,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 22:28:09.984540  546345 start.go:143] virtualization:  
	I1202 22:28:09.987326  546345 out.go:179] * [newest-cni-250247] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:28:09.991071  546345 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:28:09.991190  546345 notify.go:221] Checking for updates...
	I1202 22:28:09.996957  546345 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:28:09.999951  546345 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:28:10.003165  546345 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 22:28:10.010024  546345 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:28:10.023215  546345 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:28:10.026934  546345 config.go:182] Loaded profile config "newest-cni-250247": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:28:10.027740  546345 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:28:10.065520  546345 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:28:10.065629  546345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:28:10.146197  546345 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:28:10.137008488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:28:10.146302  546345 docker.go:319] overlay module found
	I1202 22:28:10.149701  546345 out.go:179] * Using the docker driver based on existing profile
	I1202 22:28:10.152553  546345 start.go:309] selected driver: docker
	I1202 22:28:10.152579  546345 start.go:927] validating driver "docker" against &{Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:28:10.152714  546345 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:28:10.153449  546345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:28:10.206765  546345 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:28:10.197797072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:28:10.207092  546345 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 22:28:10.207126  546345 cni.go:84] Creating CNI manager for ""
	I1202 22:28:10.207191  546345 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:28:10.207234  546345 start.go:353] cluster config:
	{Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:28:10.210373  546345 out.go:179] * Starting "newest-cni-250247" primary control-plane node in "newest-cni-250247" cluster
	I1202 22:28:10.213164  546345 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 22:28:10.216139  546345 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 22:28:10.218905  546345 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:28:10.218974  546345 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 22:28:10.241012  546345 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 22:28:10.241034  546345 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 22:28:10.277912  546345 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1202 22:28:10.461684  546345 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
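
Both 404s above simply mean that no preload tarball has been published for v1.35.0-beta.0 yet, so minikube falls back to the per-image cache it enumerates next. The missing artifact is easy to confirm by hand (URL copied from the first warning):

    # HEAD request against the preload bucket; expect an HTTP 404 status line
    curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 | head -n1
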
	I1202 22:28:10.461922  546345 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json ...
	I1202 22:28:10.461950  546345 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462038  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 22:28:10.462049  546345 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.248µs
	I1202 22:28:10.462062  546345 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 22:28:10.462074  546345 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462104  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 22:28:10.462109  546345 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 36.282µs
	I1202 22:28:10.462115  546345 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 22:28:10.462125  546345 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462157  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 22:28:10.462162  546345 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 38.727µs
	I1202 22:28:10.462169  546345 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 22:28:10.462179  546345 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462196  546345 cache.go:243] Successfully downloaded all kic artifacts
	I1202 22:28:10.462206  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 22:28:10.462212  546345 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 34.534µs
	I1202 22:28:10.462218  546345 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 22:28:10.462227  546345 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462237  546345 start.go:360] acquireMachinesLock for newest-cni-250247: {Name:mk16586a4ea8dcb4ae29d3b0c6fe6a71644be6ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462253  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 22:28:10.462258  546345 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 32.098µs
	I1202 22:28:10.462265  546345 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 22:28:10.462274  546345 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462280  546345 start.go:364] duration metric: took 29.16µs to acquireMachinesLock for "newest-cni-250247"
	I1202 22:28:10.462305  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 22:28:10.462305  546345 start.go:96] Skipping create...Using existing machine configuration
	I1202 22:28:10.462319  546345 fix.go:54] fixHost starting: 
	I1202 22:28:10.462321  546345 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462350  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 22:28:10.462360  546345 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 34.731µs
	I1202 22:28:10.462365  546345 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 22:28:10.462378  546345 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462404  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 22:28:10.462408  546345 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 35.396µs
	I1202 22:28:10.462414  546345 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 22:28:10.462311  546345 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 38.21µs
	I1202 22:28:10.462504  546345 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 22:28:10.462515  546345 cache.go:87] Successfully saved all images to host disk.
	I1202 22:28:10.462628  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:10.483660  546345 fix.go:112] recreateIfNeeded on newest-cni-250247: state=Stopped err=<nil>
	W1202 22:28:10.483692  546345 fix.go:138] unexpected machine state, will restart: <nil>
	W1202 22:28:08.293846  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:10.294170  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:10.487123  546345 out.go:252] * Restarting existing docker container for "newest-cni-250247" ...
	I1202 22:28:10.487212  546345 cli_runner.go:164] Run: docker start newest-cni-250247
	I1202 22:28:10.752920  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:10.774107  546345 kic.go:430] container "newest-cni-250247" state is running.
	I1202 22:28:10.775430  546345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:28:10.803310  546345 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json ...
	I1202 22:28:10.803660  546345 machine.go:94] provisionDockerMachine start ...
	I1202 22:28:10.803741  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:10.835254  546345 main.go:143] libmachine: Using SSH client type: native
	I1202 22:28:10.835574  546345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1202 22:28:10.835582  546345 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 22:28:10.836341  546345 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47630->127.0.0.1:33423: read: connection reset by peer
	I1202 22:28:13.985241  546345 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-250247
	
	I1202 22:28:13.985267  546345 ubuntu.go:182] provisioning hostname "newest-cni-250247"
	I1202 22:28:13.985331  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:14.004448  546345 main.go:143] libmachine: Using SSH client type: native
	I1202 22:28:14.004830  546345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1202 22:28:14.004852  546345 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-250247 && echo "newest-cni-250247" | sudo tee /etc/hostname
	I1202 22:28:14.162890  546345 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-250247
	
	I1202 22:28:14.162970  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:14.180049  546345 main.go:143] libmachine: Using SSH client type: native
	I1202 22:28:14.180364  546345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1202 22:28:14.180385  546345 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-250247' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-250247/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-250247' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 22:28:14.325738  546345 main.go:143] libmachine: SSH cmd err, output: <nil>: 
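
The empty output above means no new line had to be appended: either the hostname entry was already present in /etc/hosts or the existing 127.0.1.1 line was rewritten in place by the sed branch. Assuming the container from this run is still up, the mapping can be checked from the host:

    # Expect a line like: 127.0.1.1 newest-cni-250247
    docker exec newest-cni-250247 grep '^127.0.1.1' /etc/hosts
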
	I1202 22:28:14.325762  546345 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 22:28:14.325781  546345 ubuntu.go:190] setting up certificates
	I1202 22:28:14.325790  546345 provision.go:84] configureAuth start
	I1202 22:28:14.325861  546345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:28:14.342936  546345 provision.go:143] copyHostCerts
	I1202 22:28:14.343009  546345 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 22:28:14.343017  546345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 22:28:14.343091  546345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 22:28:14.343188  546345 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 22:28:14.343193  546345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 22:28:14.343217  546345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 22:28:14.343264  546345 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 22:28:14.343269  546345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 22:28:14.343292  546345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 22:28:14.343342  546345 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.newest-cni-250247 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-250247]
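
provision.go regenerates the Docker machine server certificate with the SAN set listed above (loopback, the node IP 192.168.85.2, and the usual minikube host names). A quick way to inspect which names the cert actually carries, using the path from the log:

    # Print the Subject Alternative Name extension of the generated server cert
    openssl x509 -in /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem \
      -noout -text | grep -A1 'Subject Alternative Name'
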
	I1202 22:28:14.770203  546345 provision.go:177] copyRemoteCerts
	I1202 22:28:14.770270  546345 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 22:28:14.770310  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:14.787300  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:14.893004  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 22:28:14.909339  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 22:28:14.926255  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 22:28:14.942726  546345 provision.go:87] duration metric: took 616.921074ms to configureAuth
	I1202 22:28:14.942753  546345 ubuntu.go:206] setting minikube options for container-runtime
	I1202 22:28:14.942983  546345 config.go:182] Loaded profile config "newest-cni-250247": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:28:14.942996  546345 machine.go:97] duration metric: took 4.139308859s to provisionDockerMachine
	I1202 22:28:14.943006  546345 start.go:293] postStartSetup for "newest-cni-250247" (driver="docker")
	I1202 22:28:14.943017  546345 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 22:28:14.943072  546345 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 22:28:14.943129  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:14.960329  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:15.069600  546345 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 22:28:15.072888  546345 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 22:28:15.072916  546345 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 22:28:15.072928  546345 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 22:28:15.073008  546345 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 22:28:15.073125  546345 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 22:28:15.073236  546345 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 22:28:15.080571  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:28:15.098287  546345 start.go:296] duration metric: took 155.265122ms for postStartSetup
	I1202 22:28:15.098433  546345 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:28:15.098514  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:15.116407  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:15.218632  546345 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 22:28:15.223330  546345 fix.go:56] duration metric: took 4.761004698s for fixHost
	I1202 22:28:15.223357  546345 start.go:83] releasing machines lock for "newest-cni-250247", held for 4.761068204s
	I1202 22:28:15.223423  546345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:28:15.240165  546345 ssh_runner.go:195] Run: cat /version.json
	I1202 22:28:15.240226  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:15.240474  546345 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 22:28:15.240537  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:15.266111  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:15.266672  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:15.465947  546345 ssh_runner.go:195] Run: systemctl --version
	I1202 22:28:15.472302  546345 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 22:28:15.476459  546345 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 22:28:15.476528  546345 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 22:28:15.484047  546345 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
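
The find above is logged with its shell quoting stripped; its job is to rename any bridge or podman CNI configs to *.mk_disabled so that kindnet remains the only active CNI. Nothing matched here, hence "nothing to disable". A quoted equivalent of what runs on the node:

    # Move competing CNI configs aside (a no-op when none exist)
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
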
	I1202 22:28:15.484071  546345 start.go:496] detecting cgroup driver to use...
	I1202 22:28:15.484132  546345 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 22:28:15.484196  546345 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 22:28:15.501336  546345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 22:28:15.514809  546345 docker.go:218] disabling cri-docker service (if available) ...
	I1202 22:28:15.514870  546345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 22:28:15.529978  546345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 22:28:15.542949  546345 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 22:28:15.646754  546345 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 22:28:15.763470  546345 docker.go:234] disabling docker service ...
	I1202 22:28:15.763534  546345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 22:28:15.778139  546345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 22:28:15.790687  546345 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 22:28:15.899099  546345 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 22:28:16.013695  546345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 22:28:16.027166  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 22:28:16.044232  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 22:28:16.054377  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 22:28:16.064256  546345 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 22:28:16.064370  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 22:28:16.074182  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:28:16.083929  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 22:28:16.093428  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:28:16.103465  546345 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 22:28:16.111974  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 22:28:16.120391  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 22:28:16.129324  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1202 22:28:16.138640  546345 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 22:28:16.146079  546345 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 22:28:16.153383  546345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:28:16.258631  546345 ssh_runner.go:195] Run: sudo systemctl restart containerd
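
The sed series above is minikube's in-place containerd reconfiguration: pin the sandbox (pause) image, set SystemdCgroup = false to match the cgroupfs driver detected on the host, normalize the runtime to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d, after which the daemon is reloaded and containerd restarted. Assuming the node container is still reachable, the result can be spot-checked:

    # Illustrative fragment of the rewritten config, not the full file
    docker exec newest-cni-250247 grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
    # sandbox_image = "registry.k8s.io/pause:3.10.1"
    # SystemdCgroup = false
    # conf_dir = "/etc/cni/net.d"
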
	I1202 22:28:16.349094  546345 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 22:28:16.349206  546345 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 22:28:16.353088  546345 start.go:564] Will wait 60s for crictl version
	I1202 22:28:16.353236  546345 ssh_runner.go:195] Run: which crictl
	I1202 22:28:16.356669  546345 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 22:28:16.382942  546345 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
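
crictl resolves its endpoint from the one-line /etc/crictl.yaml written earlier (runtime-endpoint: unix:///run/containerd/containerd.sock), which is why the version probe above succeeds without any --runtime-endpoint flag:

    # Expect: runtime-endpoint: unix:///run/containerd/containerd.sock
    docker exec newest-cni-250247 cat /etc/crictl.yaml
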
	I1202 22:28:16.383050  546345 ssh_runner.go:195] Run: containerd --version
	I1202 22:28:16.402826  546345 ssh_runner.go:195] Run: containerd --version
	I1202 22:28:16.429935  546345 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 22:28:16.432731  546345 cli_runner.go:164] Run: docker network inspect newest-cni-250247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:28:16.448989  546345 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 22:28:16.452808  546345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
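
The grep plus rewrite pair above pins host.minikube.internal to the network gateway (192.168.85.1) inside the node; the same pattern is reused further down for control-plane.minikube.internal. To verify after the fact:

    # Expect: 192.168.85.1 host.minikube.internal (tab-separated)
    docker exec newest-cni-250247 grep host.minikube.internal /etc/hosts
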
	I1202 22:28:16.464968  546345 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1202 22:28:12.794132  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:15.294790  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:16.467854  546345 kubeadm.go:884] updating cluster {Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 22:28:16.468035  546345 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:28:16.468117  546345 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:28:16.491782  546345 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 22:28:16.491805  546345 cache_images.go:86] Images are preloaded, skipping loading
	I1202 22:28:16.491813  546345 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1202 22:28:16.491914  546345 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-250247 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
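
The empty ExecStart= line in the drop-in above is the standard systemd override idiom: it clears the ExecStart inherited from the base kubelet unit so the following line replaces it rather than adding a second command. The merged unit can be inspected with:

    # Show the base kubelet unit plus the 10-kubeadm.conf drop-in written above
    docker exec newest-cni-250247 systemctl cat kubelet
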
	I1202 22:28:16.491984  546345 ssh_runner.go:195] Run: sudo crictl info
	I1202 22:28:16.515416  546345 cni.go:84] Creating CNI manager for ""
	I1202 22:28:16.515440  546345 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:28:16.515457  546345 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1202 22:28:16.515491  546345 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-250247 NodeName:newest-cni-250247 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 22:28:16.515606  546345 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-250247"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 22:28:16.515677  546345 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:28:16.522844  546345 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 22:28:16.522912  546345 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 22:28:16.529836  546345 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 22:28:16.541819  546345 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 22:28:16.553461  546345 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
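
With kubeadm.yaml.new now on the node (the scp above), the generated manifest can be sanity-checked offline. A hedged sketch, assuming the kubeadm binary found under /var/lib/minikube/binaries/v1.35.0-beta.0 two lines earlier and the validate subcommand of modern kubeadm:

    # Validate all four documents in the file (Init/Cluster/Kubelet/KubeProxy configuration)
    docker exec newest-cni-250247 sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm \
      config validate --config /var/tmp/minikube/kubeadm.yaml.new
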
	I1202 22:28:16.565531  546345 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 22:28:16.569041  546345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:28:16.578309  546345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:28:16.682927  546345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:28:16.699616  546345 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247 for IP: 192.168.85.2
	I1202 22:28:16.699641  546345 certs.go:195] generating shared ca certs ...
	I1202 22:28:16.699658  546345 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:28:16.699787  546345 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 22:28:16.699846  546345 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 22:28:16.699857  546345 certs.go:257] generating profile certs ...
	I1202 22:28:16.699953  546345 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.key
	I1202 22:28:16.700029  546345 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde
	I1202 22:28:16.700095  546345 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key
	I1202 22:28:16.700208  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 22:28:16.700249  546345 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 22:28:16.700262  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 22:28:16.700295  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 22:28:16.700323  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 22:28:16.700356  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 22:28:16.700412  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:28:16.701077  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 22:28:16.721941  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 22:28:16.740644  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 22:28:16.759568  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 22:28:16.776264  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 22:28:16.794239  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 22:28:16.814293  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 22:28:16.833481  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 22:28:16.852733  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 22:28:16.870078  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 22:28:16.886149  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 22:28:16.902507  546345 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 22:28:16.913942  546345 ssh_runner.go:195] Run: openssl version
	I1202 22:28:16.919938  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 22:28:16.927825  546345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:28:16.931606  546345 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:28:16.931675  546345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:28:16.974237  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 22:28:16.981828  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 22:28:16.989638  546345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 22:28:16.992999  546345 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 22:28:16.993061  546345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 22:28:17.033731  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
	I1202 22:28:17.041307  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 22:28:17.049114  546345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 22:28:17.052710  546345 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 22:28:17.052816  546345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 22:28:17.093368  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
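
The hash-and-symlink pairs above follow the OpenSSL CA lookup convention: openssl x509 -hash prints the certificate's subject-name hash, and a symlink named "<hash>.0" in /etc/ssl/certs is what lets verification find the CA by that hash. The last pair, spelled out:

    # 3ec20f2e is the subject hash of 2632412.pem in this run
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem)
    sudo ln -fs /etc/ssl/certs/2632412.pem "/etc/ssl/certs/${h}.0"
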
	I1202 22:28:17.101039  546345 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 22:28:17.104530  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 22:28:17.145234  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 22:28:17.186252  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 22:28:17.227251  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 22:28:17.270184  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 22:28:17.315680  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
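
Each -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours; that is how minikube decides whether a cert must be regenerated before reuse, and all six checks here proceed without triggering regeneration. Standalone form:

    # Prints "Certificate will not expire" and exits 0 while the cert stays valid for 24h
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
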
	I1202 22:28:17.356357  546345 kubeadm.go:401] StartCluster: {Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:28:17.356449  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 22:28:17.356551  546345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:28:17.384974  546345 cri.go:89] found id: ""
	I1202 22:28:17.385084  546345 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 22:28:17.392914  546345 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 22:28:17.392983  546345 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 22:28:17.393055  546345 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 22:28:17.400365  546345 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 22:28:17.400969  546345 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-250247" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:28:17.401222  546345 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-261381/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-250247" cluster setting kubeconfig missing "newest-cni-250247" context setting]
	I1202 22:28:17.401752  546345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:28:17.403065  546345 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 22:28:17.410696  546345 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 22:28:17.410762  546345 kubeadm.go:602] duration metric: took 17.7594ms to restartPrimaryControlPlane
	I1202 22:28:17.410793  546345 kubeadm.go:403] duration metric: took 54.438388ms to StartCluster
	I1202 22:28:17.410829  546345 settings.go:142] acquiring lock: {Name:mk484fa83ac7553aeb154b510943680cadb4046e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:28:17.410902  546345 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:28:17.412749  546345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:28:17.413013  546345 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 22:28:17.416416  546345 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 22:28:17.416535  546345 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-250247"
	I1202 22:28:17.416566  546345 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-250247"
	I1202 22:28:17.416596  546345 config.go:182] Loaded profile config "newest-cni-250247": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:28:17.416607  546345 host.go:66] Checking if "newest-cni-250247" exists ...
	I1202 22:28:17.416873  546345 addons.go:70] Setting dashboard=true in profile "newest-cni-250247"
	I1202 22:28:17.416893  546345 addons.go:239] Setting addon dashboard=true in "newest-cni-250247"
	W1202 22:28:17.416900  546345 addons.go:248] addon dashboard should already be in state true
	I1202 22:28:17.416923  546345 host.go:66] Checking if "newest-cni-250247" exists ...
	I1202 22:28:17.417319  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:17.417762  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:17.418220  546345 addons.go:70] Setting default-storageclass=true in profile "newest-cni-250247"
	I1202 22:28:17.418240  546345 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-250247"
	I1202 22:28:17.418515  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
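
Each cli_runner inspect above resolves the container state with a Go template passed to the Docker CLI. A minimal equivalent that shells out the same way (a sketch, not minikube's actual cli_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerStatus mirrors the inspect calls above: ask Docker for
    // .State.Status ("running", "exited", ...) via a Go template.
    func containerStatus(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        status, err := containerStatus("newest-cni-250247")
        fmt.Println(status, err)
    }
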
	I1202 22:28:17.421722  546345 out.go:179] * Verifying Kubernetes components...
	I1202 22:28:17.424546  546345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:28:17.473567  546345 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 22:28:17.473567  546345 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:28:17.475110  546345 addons.go:239] Setting addon default-storageclass=true in "newest-cni-250247"
	I1202 22:28:17.475145  546345 host.go:66] Checking if "newest-cni-250247" exists ...
	I1202 22:28:17.475548  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:17.477614  546345 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:17.477633  546345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 22:28:17.477833  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:17.481801  546345 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 22:28:17.489727  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 22:28:17.489757  546345 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 22:28:17.489831  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:17.519689  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:17.519729  546345 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 22:28:17.519742  546345 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 22:28:17.519796  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:17.551180  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:17.565506  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
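
The `scp memory --> ...` lines stage addon manifests by streaming in-memory bytes over the SSH connection whose host port (33423) was resolved from the `22/tcp` mapping inspected above; the "new ssh client" lines show minikube dialing its own SSH client for this. One way to sketch the same step with the stock ssh binary; the `sudo tee` form is an assumption, not minikube's sshutil implementation:

    package main

    import (
        "bytes"
        "log"
        "os/exec"
    )

    // copyMemoryToRemote streams data to remotePath on the node by piping it
    // into `sudo tee` over ssh. Port, key path, and user come from the log.
    func copyMemoryToRemote(data []byte, remotePath string) error {
        cmd := exec.Command("ssh",
            "-p", "33423",
            "-i", "/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa",
            "docker@127.0.0.1",
            "sudo tee "+remotePath+" >/dev/null",
        )
        cmd.Stdin = bytes.NewReader(data)
        return cmd.Run()
    }

    func main() {
        manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: kubernetes-dashboard\n")
        if err := copyMemoryToRemote(manifest, "/etc/kubernetes/addons/dashboard-ns.yaml"); err != nil {
            log.Fatal(err)
        }
    }
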
	I1202 22:28:17.644850  546345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:28:17.726531  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:17.763912  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 22:28:17.792014  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 22:28:17.792042  546345 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 22:28:17.824225  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 22:28:17.824250  546345 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 22:28:17.838468  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 22:28:17.838492  546345 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 22:28:17.851940  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 22:28:17.851965  546345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 22:28:17.864211  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 22:28:17.864276  546345 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 22:28:17.876057  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 22:28:17.876079  546345 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 22:28:17.887797  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 22:28:17.887867  546345 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 22:28:17.899526  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 22:28:17.899547  546345 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 22:28:17.911602  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:17.911626  546345 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 22:28:17.923996  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
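
The dashboard apply above is one kubectl invocation run as root inside the node, with KUBECONFIG pointed at the in-VM kubeconfig and every staged manifest passed as a separate -f flag (sudo accepts the VAR=value prefix as its first argument). A hedged reconstruction of how that command line is assembled:

    package main

    import "os/exec"

    // applyAddonManifests builds the `sudo KUBECONFIG=... kubectl apply -f ...`
    // command seen above; paths and the kubectl version come from this log.
    func applyAddonManifests(files []string) *exec.Cmd {
        args := []string{
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl", "apply",
        }
        for _, f := range files {
            args = append(args, "-f", f)
        }
        return exec.Command("sudo", args...)
    }

    func main() {
        cmd := applyAddonManifests([]string{
            "/etc/kubernetes/addons/dashboard-ns.yaml",
            "/etc/kubernetes/addons/dashboard-svc.yaml",
        })
        _ = cmd.Run() // failures are handled by the caller's retry loop, as below
    }
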
	I1202 22:28:18.303299  546345 api_server.go:52] waiting for apiserver process to appear ...
	I1202 22:28:18.303418  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
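
The api_server.go line above starts a poll loop: the same pgrep command is re-run roughly every 500ms (see the repeats at 22:28:18.8, 19.3, 19.8, 20.3, 20.8 below) until a kube-apiserver process with matching arguments appears. A sketch of that loop, again assuming a hypothetical `run` helper:

    package main

    import (
        "fmt"
        "time"
    )

    // waitForAPIServerProcess polls pgrep until it exits 0 or the deadline
    // passes, matching the ~500ms cadence of the pgrep lines in this log.
    func waitForAPIServerProcess(run func(cmd string) error, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if run("sudo pgrep -xnf kube-apiserver.*minikube.*") == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process never appeared")
    }

    func main() {
        attempts := 0
        err := waitForAPIServerProcess(func(string) error {
            attempts++
            if attempts < 3 {
                return fmt.Errorf("no such process")
            }
            return nil
        }, 6*time.Minute)
        fmt.Println(err)
    }
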
	W1202 22:28:18.303565  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.303612  546345 retry.go:31] will retry after 133.710161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:18.303717  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.303748  546345 retry.go:31] will retry after 138.021594ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:18.303974  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.304008  546345 retry.go:31] will retry after 237.208538ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.438371  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:18.442705  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:18.512074  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.512108  546345 retry.go:31] will retry after 489.996663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:18.521184  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.521218  546345 retry.go:31] will retry after 506.041741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.542348  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:18.605737  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.605775  546345 retry.go:31] will retry after 347.613617ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.804191  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:18.953629  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:19.003207  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:19.021755  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.021793  546345 retry.go:31] will retry after 285.211473ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.028084  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:19.152805  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.152839  546345 retry.go:31] will retry after 301.33995ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:19.169007  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.169038  546345 retry.go:31] will retry after 787.522923ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.304323  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:19.307756  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:19.364720  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.364752  546345 retry.go:31] will retry after 744.498002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.454779  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:19.514605  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.514684  546345 retry.go:31] will retry after 936.080491ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.803793  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:19.957439  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:17.793953  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:20.293990  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:20.022370  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.022406  546345 retry.go:31] will retry after 798.963887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.109555  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:20.176777  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.176873  546345 retry.go:31] will retry after 799.677911ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.303906  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:20.451319  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:20.513056  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.513087  546345 retry.go:31] will retry after 774.001274ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.804493  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:20.822263  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:20.884574  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.884663  546345 retry.go:31] will retry after 1.794003449s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.976884  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:21.043200  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:21.043233  546345 retry.go:31] will retry after 2.577364105s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:21.287368  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:21.303812  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:21.396263  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:21.396297  546345 retry.go:31] will retry after 1.406655136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:21.803778  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:22.303682  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
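
Interleaved with the retries, ssh_runner executes sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms: minikube is polling for the apiserver process itself to reappear. A sketch of that polling loop, run locally rather than over SSH, follows; waitForAPIServer is a hypothetical helper, and it assumes sudo and pgrep are available on the host.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls for a running kube-apiserver process the way
// the ssh_runner lines above do. The pgrep flags match the log:
// -x exact match, -n newest process, -f match the full command line.
func waitForAPIServer(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			return true // pgrep printed a matching PID
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	fmt.Println("apiserver process found:", waitForAPIServer(30*time.Second))
}

In this log the pgrep probes keep running for many seconds without the applies ever succeeding, which is consistent with the apiserver process not coming back during the window shown.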
	I1202 22:28:22.678940  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:22.734117  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:22.734151  546345 retry.go:31] will retry after 2.241021271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:22.803453  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:22.803660  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:22.908987  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:22.909065  546345 retry.go:31] will retry after 2.592452064s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:23.304587  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:23.621298  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:23.681960  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:23.681992  546345 retry.go:31] will retry after 4.002263162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:23.804126  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:24.303637  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:24.803614  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:24.976147  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:22.793981  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:25.293952  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
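
The two W lines above come from a different process (PID 539599, the TestStartStop no-preload test running in parallel): its node_ready.go is polling the "Ready" condition of node no-preload-904303 against a second apiserver at 192.168.76.2:8443, which is also refusing connections. The sketch below shows only the shape of that request; nodeReady is a hypothetical helper, authentication is omitted (a real cluster needs a client cert or token), and the host and node name are copied from the log.

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// nodeReady fetches a node object from the apiserver and scans its
// status conditions for Ready=True, mirroring what the node_ready.go
// lines are retrying. Treat this as a sketch of the request shape.
func nodeReady(apiserver, node string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(apiserver + "/api/v1/nodes/" + node)
	if err != nil {
		return false, err // e.g. "connect: connection refused" as in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return false, fmt.Errorf("unexpected status %s", resp.Status)
	}
	var obj struct {
		Status struct {
			Conditions []struct{ Type, Status string } `json:"conditions"`
		} `json:"status"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&obj); err != nil {
		return false, err
	}
	for _, c := range obj.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	ready, err := nodeReady("https://192.168.76.2:8443", "no-preload-904303")
	fmt.Println(ready, err)
}

Because the two tests log to the same stream, their timestamps interleave out of order here; that is merging of parallel output, not clock skew.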
	W1202 22:28:25.036436  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:25.036470  546345 retry.go:31] will retry after 3.520246776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:25.303592  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:25.502542  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:25.567000  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:25.567033  546345 retry.go:31] will retry after 5.323254411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:25.804224  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:26.304369  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:26.803599  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:27.303952  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:27.684919  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:27.748186  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:27.748220  546345 retry.go:31] will retry after 5.733866836s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:27.804400  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:28.304209  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:28.556915  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:28.614437  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:28.614469  546345 retry.go:31] will retry after 5.59146354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:28.803555  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:29.303563  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:29.803564  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:27.794055  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:29.794270  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:32.293942  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:30.304278  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:30.803599  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:30.891315  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:30.954133  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:30.954165  546345 retry.go:31] will retry after 6.008326018s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:31.303642  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:31.803766  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:32.304456  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:32.804272  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:33.304447  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:33.482755  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:33.544609  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:33.544640  546345 retry.go:31] will retry after 5.236447557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:33.804125  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:34.206989  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:34.267528  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:34.267562  546345 retry.go:31] will retry after 5.128568146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:34.303642  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:34.804011  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:34.793866  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:36.794018  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:35.304181  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:35.803881  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:36.304159  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:36.804539  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:36.963637  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:37.037814  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:37.037848  546345 retry.go:31] will retry after 8.195284378s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:37.304208  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:37.804338  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:38.303552  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:38.781347  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:38.803757  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:38.846454  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:38.846487  546345 retry.go:31] will retry after 10.92120738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:39.304100  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:39.396834  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:39.454859  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:39.454893  546345 retry.go:31] will retry after 6.04045657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:39.804469  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:39.293843  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:41.293938  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:40.303596  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:40.804541  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:41.303922  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:41.803906  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:42.304508  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:42.804313  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:43.304463  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:43.803539  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:44.304169  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:44.803620  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:43.294289  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:45.294597  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:47.294896  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
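
The W-level node_ready.go lines come from a second test process (539599, the no-preload StartStop run) polling the "Ready" condition of node no-preload-904303 roughly every two seconds against 192.168.76.2:8443, which is refusing connections just like localhost:8443 above. A minimal sketch of that kind of readiness poll with client-go follows, assuming a reachable kubeconfig; the node name and 2s cadence are taken from the log, everything else is illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's conditions until NodeReady is True.
// Errors (such as "connection refused") are swallowed and retried,
// mirroring the "will retry" warnings in the log.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	// Kubeconfig path reused from the log for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(context.Background(), cs, "no-preload-904303"))
}
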
	I1202 22:28:45.235996  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:45.303907  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:45.410878  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:45.410909  546345 retry.go:31] will retry after 9.368309576s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:45.496112  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:45.553672  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:45.553705  546345 retry.go:31] will retry after 7.750202952s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:45.804015  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:46.303559  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:46.804327  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:47.303603  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:47.804053  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:48.303550  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:48.803634  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:49.303688  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:49.768489  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:49.804064  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:49.895914  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:49.895948  546345 retry.go:31] will retry after 11.070404971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:49.794091  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:51.794902  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:50.304462  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:50.803593  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:51.304256  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:51.804118  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:52.304451  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:52.804096  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:53.303837  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:53.304041  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:53.361880  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:53.361915  546345 retry.go:31] will retry after 21.51867829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:53.804496  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:54.303718  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:54.779367  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:54.803837  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:54.852160  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:54.852195  546345 retry.go:31] will retry after 25.514460464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:54.293970  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:56.294081  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:55.303807  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:55.804288  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:56.304329  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:56.803616  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:57.303836  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:57.804152  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:58.304034  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:58.803992  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:59.304109  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:59.804084  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:58.793961  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:01.293995  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:00.305594  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:00.803492  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:00.967275  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:29:01.023919  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:01.023952  546345 retry.go:31] will retry after 14.799716379s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:01.304168  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:01.804346  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:02.304261  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:02.803541  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:03.304078  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:03.804260  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:04.304145  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:04.803593  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:29:03.793972  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:05.794096  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:05.304303  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:05.804290  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:06.304157  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:06.804297  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:07.304486  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:07.803594  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:08.303514  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:08.803514  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:09.304264  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:09.804046  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:29:08.294013  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:10.794007  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:10.304151  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:10.804338  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:11.304108  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:11.803600  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:12.304520  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:12.804189  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:13.304155  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:13.803517  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:14.304548  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:14.803761  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:14.881559  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:29:14.937730  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:14.937760  546345 retry.go:31] will retry after 41.941175985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
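
Across this section the storageclass retry delays grow from 6.04s through 7.75s and 21.5s to 41.9s, which looks like a jittered, roughly doubling backoff. The log does not show retry.go's actual algorithm, so the following is only a sketch of a schedule with that shape:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// backoffSchedule produces a roughly doubling series of delays with up to
// 50% random jitter, resembling the intervals reported by retry.go above.
// Hypothetical: minikube's real backoff parameters are not in the log.
func backoffSchedule(base time.Duration, attempts int) []time.Duration {
	out := make([]time.Duration, 0, attempts)
	d := base
	for i := 0; i < attempts; i++ {
		jitter := time.Duration(rand.Int63n(int64(d / 2)))
		out = append(out, d+jitter)
		d *= 2
	}
	return out
}

func main() {
	for i, d := range backoffSchedule(5*time.Second, 4) {
		fmt.Printf("retry %d after %s\n", i+1, d)
	}
}
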
	W1202 22:29:13.294025  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:15.301168  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:15.316948  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:15.804548  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:15.823888  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:29:15.884943  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:15.884976  546345 retry.go:31] will retry after 35.611848449s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:16.303570  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:16.803687  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:17.304005  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:17.804234  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:17.804335  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:17.829227  546345 cri.go:89] found id: ""
	I1202 22:29:17.829257  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.829265  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:17.829272  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:17.829332  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:17.853121  546345 cri.go:89] found id: ""
	I1202 22:29:17.853146  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.853154  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:17.853161  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:17.853219  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:17.877170  546345 cri.go:89] found id: ""
	I1202 22:29:17.877195  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.877204  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:17.877210  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:17.877267  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:17.904673  546345 cri.go:89] found id: ""
	I1202 22:29:17.904698  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.904707  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:17.904717  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:17.904784  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:17.928244  546345 cri.go:89] found id: ""
	I1202 22:29:17.928284  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.928294  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:17.928301  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:17.928363  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:17.951262  546345 cri.go:89] found id: ""
	I1202 22:29:17.951283  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.951292  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:17.951299  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:17.951363  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:17.979941  546345 cri.go:89] found id: ""
	I1202 22:29:17.979971  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.979980  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:17.979987  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:17.980046  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:18.014330  546345 cri.go:89] found id: ""
	I1202 22:29:18.014352  546345 logs.go:282] 0 containers: []
	W1202 22:29:18.014361  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:18.014370  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:18.014382  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:18.070623  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:18.070659  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:18.086453  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:18.086483  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:18.147206  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:18.139601    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.140184    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.141932    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.142471    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.144157    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:18.139601    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.140184    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.141932    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.142471    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.144157    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:18.147229  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:18.147242  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:18.171557  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:18.171592  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
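
The block above is one complete diagnostic pass: minikube asks crictl for each control-plane component in turn, finds no containers, and falls back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status output (the same pass repeats below while it waits for the apiserver). A minimal Go sketch of the per-component query, assuming a hypothetical listContainers helper — the real code lives in minikube's cri.go and logs.go cited in the lines above:

// Sketch only: mirrors the "crictl ps -a --quiet --name=..." probes above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers asks crictl for container IDs matching a component name.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}
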
	W1202 22:29:17.794066  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:20.293905  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:22.293952  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:20.367703  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:29:20.422565  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:20.422597  546345 retry.go:31] will retry after 40.968515426s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
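
The failed addon apply above is handed to minikube's retry helper (retry.go:31), which schedules another attempt after a jittered interval ("will retry after 40.968515426s"). A rough sketch of that pattern, with illustrative attempt counts and backoff parameters rather than minikube's actual ones:

// Sketch only: jitter and attempt count are assumptions, not minikube's values.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfter runs fn until it succeeds or attempts are exhausted, sleeping a
// jittered interval between tries and logging the delay, as seen above.
func retryAfter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	_ = retryAfter(3, 20*time.Second, func() error {
		return errors.New("connect: connection refused")
	})
}
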
	I1202 22:29:20.701050  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:20.711132  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:20.711213  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:20.734019  546345 cri.go:89] found id: ""
	I1202 22:29:20.734042  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.734050  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:20.734057  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:20.734114  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:20.756521  546345 cri.go:89] found id: ""
	I1202 22:29:20.756546  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.756554  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:20.756561  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:20.756620  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:20.787826  546345 cri.go:89] found id: ""
	I1202 22:29:20.787852  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.787869  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:20.787876  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:20.787939  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:20.811402  546345 cri.go:89] found id: ""
	I1202 22:29:20.811427  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.811435  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:20.811441  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:20.811500  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:20.835289  546345 cri.go:89] found id: ""
	I1202 22:29:20.835314  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.835322  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:20.835329  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:20.835404  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:20.858522  546345 cri.go:89] found id: ""
	I1202 22:29:20.858548  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.858556  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:20.858563  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:20.858622  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:20.883759  546345 cri.go:89] found id: ""
	I1202 22:29:20.883783  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.883791  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:20.883798  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:20.883857  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:20.907968  546345 cri.go:89] found id: ""
	I1202 22:29:20.907992  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.908001  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:20.908010  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:20.908020  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:20.962992  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:20.963028  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:20.978472  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:20.978499  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:21.039749  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:21.032843    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.033345    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.034809    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.035236    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.036659    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:21.032843    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.033345    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.034809    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.035236    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.036659    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:21.039771  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:21.039784  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:21.064157  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:21.064194  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:23.595745  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:23.606920  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:23.606996  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:23.633420  546345 cri.go:89] found id: ""
	I1202 22:29:23.633450  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.633459  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:23.633473  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:23.633532  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:23.659559  546345 cri.go:89] found id: ""
	I1202 22:29:23.659581  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.659590  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:23.659596  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:23.659663  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:23.684986  546345 cri.go:89] found id: ""
	I1202 22:29:23.685010  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.685031  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:23.685039  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:23.685099  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:23.709487  546345 cri.go:89] found id: ""
	I1202 22:29:23.709560  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.709583  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:23.709604  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:23.709734  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:23.734133  546345 cri.go:89] found id: ""
	I1202 22:29:23.734159  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.734167  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:23.734173  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:23.734233  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:23.758126  546345 cri.go:89] found id: ""
	I1202 22:29:23.758190  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.758213  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:23.758234  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:23.758327  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:23.782448  546345 cri.go:89] found id: ""
	I1202 22:29:23.782471  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.782480  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:23.782505  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:23.782579  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:23.806736  546345 cri.go:89] found id: ""
	I1202 22:29:23.806761  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.806770  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:23.806780  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:23.806790  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:23.865578  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:23.865619  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:23.881434  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:23.881470  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:23.944584  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:23.936843    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.937517    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.939081    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.939622    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.941360    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:23.936843    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.937517    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.939081    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.939622    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.941360    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:23.944606  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:23.944619  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:23.970159  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:23.970207  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
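
Each "Gathering logs for ..." line maps a named source to a shell command executed over SSH. The command strings below are copied verbatim from the Run: lines above; the dispatch map and the gather helper are assumptions for illustration, not minikube's actual structure:

// Sketch only: logSources mirrors the /bin/bash -c invocations in the log.
package diag

import "os/exec"

var logSources = map[string]string{
	"kubelet":          "sudo journalctl -u kubelet -n 400",
	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"containerd":       "sudo journalctl -u containerd -n 400",
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

// gather runs each source's command through bash, collecting combined output.
func gather() map[string][]byte {
	out := make(map[string][]byte)
	for name, cmd := range logSources {
		b, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		out[name] = b
	}
	return out
}
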
	W1202 22:29:24.793885  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:26.794021  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:26.498138  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:26.508783  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:26.508852  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:26.537015  546345 cri.go:89] found id: ""
	I1202 22:29:26.537037  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.537046  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:26.537053  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:26.537110  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:26.574312  546345 cri.go:89] found id: ""
	I1202 22:29:26.574339  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.574347  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:26.574354  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:26.574411  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:26.629052  546345 cri.go:89] found id: ""
	I1202 22:29:26.629079  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.629087  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:26.629094  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:26.629150  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:26.658217  546345 cri.go:89] found id: ""
	I1202 22:29:26.658251  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.658259  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:26.658266  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:26.658337  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:26.681717  546345 cri.go:89] found id: ""
	I1202 22:29:26.681751  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.681760  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:26.681778  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:26.681850  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:26.704611  546345 cri.go:89] found id: ""
	I1202 22:29:26.704646  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.704655  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:26.704661  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:26.704733  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:26.728028  546345 cri.go:89] found id: ""
	I1202 22:29:26.728091  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.728115  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:26.728137  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:26.728223  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:26.755557  546345 cri.go:89] found id: ""
	I1202 22:29:26.755582  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.755590  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:26.755600  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:26.755611  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:26.786053  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:26.786080  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:26.841068  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:26.841100  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:26.856799  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:26.856829  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:26.924274  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:26.913901    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.914406    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.918374    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.919140    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.920188    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:26.913901    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.914406    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.918374    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.919140    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.920188    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:26.924338  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:26.924358  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:29.449918  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:29.460186  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:29.460259  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:29.483893  546345 cri.go:89] found id: ""
	I1202 22:29:29.483915  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.483924  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:29.483930  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:29.483990  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:29.507973  546345 cri.go:89] found id: ""
	I1202 22:29:29.507999  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.508007  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:29.508013  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:29.508073  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:29.532020  546345 cri.go:89] found id: ""
	I1202 22:29:29.532045  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.532054  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:29.532061  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:29.532119  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:29.583563  546345 cri.go:89] found id: ""
	I1202 22:29:29.583590  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.583599  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:29.583606  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:29.583664  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:29.626796  546345 cri.go:89] found id: ""
	I1202 22:29:29.626821  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.626830  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:29.626837  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:29.626910  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:29.650151  546345 cri.go:89] found id: ""
	I1202 22:29:29.650179  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.650186  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:29.650193  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:29.650254  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:29.677989  546345 cri.go:89] found id: ""
	I1202 22:29:29.678015  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.678023  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:29.678031  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:29.678090  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:29.707431  546345 cri.go:89] found id: ""
	I1202 22:29:29.707457  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.707465  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:29.707475  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:29.707486  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:29.773447  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:29.766251    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.766804    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.768321    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.768748    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.770331    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:29.766251    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.766804    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.768321    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.768748    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.770331    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:29.773470  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:29.773484  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:29.798530  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:29.798604  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:29.825490  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:29.825517  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:29.884423  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:29.884461  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1202 22:29:28.794762  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:30.793709  539599 node_ready.go:38] duration metric: took 6m0.000289785s for node "no-preload-904303" to be "Ready" ...
	I1202 22:29:30.796935  539599 out.go:203] 
	W1202 22:29:30.799794  539599 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 22:29:30.799816  539599 out.go:285] * 
	W1202 22:29:30.802151  539599 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 22:29:30.804961  539599 out.go:203] 
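
This is where the no-preload SecondStart gives up: the node never reported Ready within the 6m0s wait (node_ready.go, cited above), so the start exits with GUEST_START / context deadline exceeded. A sketch of such a Ready poll using client-go's wait helpers — waitNodeReady is an assumed name, and minikube's own loop differs in detail:

// Sketch only: polls node Ready with a 6m deadline, swallowing transient
// "connection refused" errors so the poll keeps retrying until the deadline.
package diag

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // e.g. dial tcp ...:8443: connect: connection refused
			}
			for _, c := range node.Status.Conditions {
				if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}
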
	I1202 22:29:32.401788  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:32.413697  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:32.413768  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:32.447463  546345 cri.go:89] found id: ""
	I1202 22:29:32.447486  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.447494  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:32.447501  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:32.447560  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:32.480451  546345 cri.go:89] found id: ""
	I1202 22:29:32.480473  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.480481  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:32.480487  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:32.480543  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:32.518559  546345 cri.go:89] found id: ""
	I1202 22:29:32.518581  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.518590  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:32.518596  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:32.518652  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:32.570716  546345 cri.go:89] found id: ""
	I1202 22:29:32.570737  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.570746  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:32.570752  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:32.570809  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:32.612686  546345 cri.go:89] found id: ""
	I1202 22:29:32.612722  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.612731  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:32.612738  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:32.612797  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:32.651570  546345 cri.go:89] found id: ""
	I1202 22:29:32.651592  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.651600  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:32.651607  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:32.651671  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:32.679451  546345 cri.go:89] found id: ""
	I1202 22:29:32.679475  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.679484  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:32.679490  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:32.679552  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:32.705124  546345 cri.go:89] found id: ""
	I1202 22:29:32.705149  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.705170  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:32.705180  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:32.705193  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:32.772557  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:32.763930    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.764653    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.766262    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.766778    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.768469    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:32.763930    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.764653    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.766262    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.766778    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.768469    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:32.772578  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:32.772590  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:32.798210  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:32.798246  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:32.826270  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:32.826298  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:32.885460  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:32.885496  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:35.401743  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:35.412979  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:35.413051  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:35.438650  546345 cri.go:89] found id: ""
	I1202 22:29:35.438684  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.438703  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:35.438710  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:35.438787  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:35.467326  546345 cri.go:89] found id: ""
	I1202 22:29:35.467350  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.467358  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:35.467365  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:35.467444  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:35.492513  546345 cri.go:89] found id: ""
	I1202 22:29:35.492546  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.492554  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:35.492561  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:35.492659  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:35.517758  546345 cri.go:89] found id: ""
	I1202 22:29:35.517785  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.517794  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:35.517801  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:35.517861  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:35.564303  546345 cri.go:89] found id: ""
	I1202 22:29:35.564329  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.564338  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:35.564345  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:35.564431  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:35.610173  546345 cri.go:89] found id: ""
	I1202 22:29:35.610253  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.610289  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:35.610311  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:35.610412  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:35.647481  546345 cri.go:89] found id: ""
	I1202 22:29:35.647545  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.647560  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:35.647567  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:35.647628  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:35.671535  546345 cri.go:89] found id: ""
	I1202 22:29:35.671561  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.671569  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:35.671579  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:35.671591  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:35.736069  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:35.728833    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.729443    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.730886    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.731384    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.732973    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:35.728833    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.729443    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.730886    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.731384    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.732973    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:35.736092  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:35.736106  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:35.760759  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:35.760794  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:35.786652  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:35.786678  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:35.842999  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:35.843035  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:38.358963  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:38.369060  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:38.369123  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:38.400304  546345 cri.go:89] found id: ""
	I1202 22:29:38.400330  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.400339  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:38.400351  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:38.400407  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:38.424847  546345 cri.go:89] found id: ""
	I1202 22:29:38.424873  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.424881  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:38.424888  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:38.424946  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:38.452445  546345 cri.go:89] found id: ""
	I1202 22:29:38.452472  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.452481  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:38.452487  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:38.452544  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:38.480761  546345 cri.go:89] found id: ""
	I1202 22:29:38.480783  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.480804  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:38.480811  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:38.480870  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:38.505019  546345 cri.go:89] found id: ""
	I1202 22:29:38.505044  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.505052  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:38.505059  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:38.505116  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:38.528009  546345 cri.go:89] found id: ""
	I1202 22:29:38.528036  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.528045  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:38.528052  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:38.528109  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:38.595576  546345 cri.go:89] found id: ""
	I1202 22:29:38.595598  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.595606  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:38.595613  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:38.595671  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:38.638153  546345 cri.go:89] found id: ""
	I1202 22:29:38.638177  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.638186  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:38.638195  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:38.638206  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:38.653639  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:38.653696  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:38.715223  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:38.707314    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:38.708623    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:38.709486    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:38.710286    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:38.711017    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:38.715245  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:38.715258  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:38.739162  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:38.739196  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:38.766317  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:38.766345  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
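The cycle above is minikube's wait loop: poll for a kube-apiserver process, enumerate the control-plane containers through crictl, then re-gather kubelet/dmesg/containerd logs; every "describe nodes" attempt fails because nothing is listening on localhost:8443 inside the node. A minimal manual check under the same assumptions (the profile name is a placeholder, not taken from this log, and curl is assumed to be present in the node image):

    minikube -p <profile> ssh -- sudo crictl ps -a --name kube-apiserver
    minikube -p <profile> ssh -- curl -sk https://localhost:8443/healthz

In the state this log captures, the first would print no container IDs and the second would fail with connection refused, matching the errors throughout this section.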
	I1202 22:29:41.321520  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:41.331550  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:41.331636  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:41.355934  546345 cri.go:89] found id: ""
	I1202 22:29:41.355959  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.355968  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:41.355975  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:41.356035  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:41.381232  546345 cri.go:89] found id: ""
	I1202 22:29:41.381254  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.381263  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:41.381269  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:41.381325  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:41.406147  546345 cri.go:89] found id: ""
	I1202 22:29:41.406171  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.406179  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:41.406186  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:41.406246  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:41.435516  546345 cri.go:89] found id: ""
	I1202 22:29:41.435542  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.435551  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:41.435559  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:41.435619  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:41.460909  546345 cri.go:89] found id: ""
	I1202 22:29:41.460932  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.460941  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:41.460948  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:41.461035  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:41.487520  546345 cri.go:89] found id: ""
	I1202 22:29:41.487553  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.487570  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:41.487577  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:41.487648  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:41.512354  546345 cri.go:89] found id: ""
	I1202 22:29:41.512425  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.512449  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:41.512469  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:41.512552  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:41.536885  546345 cri.go:89] found id: ""
	I1202 22:29:41.536908  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.536917  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:41.536927  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:41.536938  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:41.607465  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:41.607514  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:41.635996  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:41.636025  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:41.712077  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:41.704951    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:41.705647    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:41.707121    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:41.707514    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:41.708659    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:41.712100  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:41.712113  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:41.736613  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:41.736660  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
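The "container status" gather uses a shell fallback chain taken verbatim from the log: `which crictl || echo crictl` resolves crictl's path (falling back to the bare name if it is not on PATH), and if that invocation fails the whole pipeline falls back to Docker. An equivalent standalone form (command substitution written as $(...) instead of backticks):

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a

On this containerd node the crictl branch succeeds, so the docker fallback is never reached.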
	I1202 22:29:44.265095  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:44.276615  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:44.276703  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:44.303304  546345 cri.go:89] found id: ""
	I1202 22:29:44.303325  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.303334  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:44.303340  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:44.303403  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:44.333144  546345 cri.go:89] found id: ""
	I1202 22:29:44.333167  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.333176  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:44.333182  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:44.333258  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:44.359646  546345 cri.go:89] found id: ""
	I1202 22:29:44.359675  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.359684  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:44.359691  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:44.359751  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:44.384230  546345 cri.go:89] found id: ""
	I1202 22:29:44.384255  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.384264  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:44.384270  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:44.384342  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:44.409648  546345 cri.go:89] found id: ""
	I1202 22:29:44.409701  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.409711  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:44.409718  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:44.409776  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:44.434410  546345 cri.go:89] found id: ""
	I1202 22:29:44.434437  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.434446  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:44.434452  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:44.434512  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:44.458352  546345 cri.go:89] found id: ""
	I1202 22:29:44.458376  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.458385  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:44.458392  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:44.458465  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:44.486353  546345 cri.go:89] found id: ""
	I1202 22:29:44.486385  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.486396  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:44.486420  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:44.486436  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:44.510698  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:44.510737  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:44.552264  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:44.552293  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:44.660418  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:44.660451  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:44.676162  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:44.676230  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:44.741313  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:44.734563    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:44.735043    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:44.736515    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:44.736835    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:44.738249    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
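Each poll enumerates the same eight container names. A compact equivalent of the sequence the log repeats every few seconds (the interval is driven by minikube's wait loop):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$c"
    done

Every query returns an empty ID list, which is what produces the repeated 'No container was found matching ...' warnings.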
	I1202 22:29:47.241695  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:47.253909  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:47.253977  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:47.280127  546345 cri.go:89] found id: ""
	I1202 22:29:47.280151  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.280159  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:47.280166  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:47.280227  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:47.309688  546345 cri.go:89] found id: ""
	I1202 22:29:47.309711  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.309719  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:47.309726  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:47.309795  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:47.334233  546345 cri.go:89] found id: ""
	I1202 22:29:47.334259  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.334268  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:47.334275  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:47.334330  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:47.363203  546345 cri.go:89] found id: ""
	I1202 22:29:47.363228  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.363237  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:47.363245  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:47.363314  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:47.390073  546345 cri.go:89] found id: ""
	I1202 22:29:47.390096  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.390104  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:47.390111  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:47.390168  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:47.416413  546345 cri.go:89] found id: ""
	I1202 22:29:47.416435  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.416444  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:47.416451  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:47.416518  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:47.440718  546345 cri.go:89] found id: ""
	I1202 22:29:47.440743  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.440753  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:47.440759  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:47.440818  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:47.463875  546345 cri.go:89] found id: ""
	I1202 22:29:47.463901  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.463910  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:47.463920  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:47.463931  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:47.492814  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:47.492842  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:47.558225  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:47.558264  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:47.574145  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:47.574174  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:47.666298  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:47.658677    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:47.659357    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:47.660936    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:47.661477    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:47.663047    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:47.666357  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:47.666385  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:50.191511  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:50.202178  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:50.202258  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:50.229173  546345 cri.go:89] found id: ""
	I1202 22:29:50.229213  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.229222  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:50.229228  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:50.229293  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:50.253932  546345 cri.go:89] found id: ""
	I1202 22:29:50.253962  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.253971  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:50.253977  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:50.254033  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:50.278257  546345 cri.go:89] found id: ""
	I1202 22:29:50.278280  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.278289  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:50.278296  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:50.278351  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:50.306884  546345 cri.go:89] found id: ""
	I1202 22:29:50.306907  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.306914  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:50.306921  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:50.306989  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:50.331454  546345 cri.go:89] found id: ""
	I1202 22:29:50.331528  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.331553  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:50.331566  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:50.331658  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:50.355157  546345 cri.go:89] found id: ""
	I1202 22:29:50.355230  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.355254  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:50.355268  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:50.355346  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:50.380390  546345 cri.go:89] found id: ""
	I1202 22:29:50.380415  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.380424  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:50.380430  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:50.380518  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:50.408708  546345 cri.go:89] found id: ""
	I1202 22:29:50.408733  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.408742  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:50.408751  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:50.408800  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:50.466607  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:50.466641  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:50.482087  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:50.482154  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:50.548310  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:50.537223    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:50.537900    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:50.541639    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:50.542300    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:50.543919    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:50.548334  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:50.548347  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:50.581455  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:50.581492  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:51.497099  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:29:51.556470  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:29:51.556588  546345 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
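The dashboard apply fails before any object reaches the cluster: with validation on (kubectl's default), the client first downloads the OpenAPI schema from the apiserver, so a down apiserver turns every manifest into an "error validating" failure. The --validate=false escape hatch suggested in the stderr would only defer the failure, since the subsequent API requests would still get connection refused. A single-manifest re-run from inside the node, using exactly the paths logged above:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      -f /etc/kubernetes/addons/dashboard-ns.yaml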
	I1202 22:29:53.133025  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:53.143115  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:53.143180  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:53.166147  546345 cri.go:89] found id: ""
	I1202 22:29:53.166169  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.166177  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:53.166183  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:53.166251  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:53.191215  546345 cri.go:89] found id: ""
	I1202 22:29:53.191238  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.191247  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:53.191253  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:53.191329  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:53.214527  546345 cri.go:89] found id: ""
	I1202 22:29:53.214593  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.214616  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:53.214631  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:53.214701  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:53.239062  546345 cri.go:89] found id: ""
	I1202 22:29:53.239089  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.239098  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:53.239105  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:53.239270  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:53.269346  546345 cri.go:89] found id: ""
	I1202 22:29:53.269416  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.269440  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:53.269462  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:53.269571  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:53.293728  546345 cri.go:89] found id: ""
	I1202 22:29:53.293802  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.293825  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:53.293845  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:53.293942  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:53.322079  546345 cri.go:89] found id: ""
	I1202 22:29:53.322106  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.322115  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:53.322121  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:53.322180  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:53.345988  546345 cri.go:89] found id: ""
	I1202 22:29:53.346055  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.346079  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:53.346103  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:53.346128  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:53.402872  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:53.402909  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:53.418121  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:53.418150  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:53.480652  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:53.472986    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:53.473648    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:53.475387    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:53.475778    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:53.477212    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:53.480725  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:53.480756  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:53.505378  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:53.505414  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:56.037255  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:56.048340  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:56.048412  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:56.080851  546345 cri.go:89] found id: ""
	I1202 22:29:56.080878  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.080888  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:56.080894  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:56.080963  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:56.105446  546345 cri.go:89] found id: ""
	I1202 22:29:56.105472  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.105481  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:56.105488  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:56.105545  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:56.131318  546345 cri.go:89] found id: ""
	I1202 22:29:56.131344  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.131352  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:56.131358  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:56.131414  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:56.159096  546345 cri.go:89] found id: ""
	I1202 22:29:56.159118  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.159126  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:56.159132  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:56.159191  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:56.183173  546345 cri.go:89] found id: ""
	I1202 22:29:56.183199  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.183207  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:56.183214  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:56.183279  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:56.207984  546345 cri.go:89] found id: ""
	I1202 22:29:56.208017  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.208029  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:56.208035  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:56.208095  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:56.232594  546345 cri.go:89] found id: ""
	I1202 22:29:56.232617  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.232625  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:56.232632  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:56.232699  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:56.257221  546345 cri.go:89] found id: ""
	I1202 22:29:56.257247  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.257256  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:56.257265  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:56.257278  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:56.283035  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:56.283061  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:56.339962  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:56.339997  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:56.355699  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:56.355773  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:56.414625  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:56.408245    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:56.408723    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:56.409828    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:56.410193    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:56.411567    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:56.414693  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:56.414738  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:56.879279  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:29:56.938440  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:29:56.938561  546345 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
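The 'default-storageclass' addon hits the same root cause, and addons.go marks it "apply failed, will retry", so once kube-apiserver binds :8443 the retry should succeed without intervention. The manual equivalent is the logged command itself:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      -f /etc/kubernetes/addons/storageclass.yaml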
	I1202 22:29:58.938802  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:58.951366  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:58.951487  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:58.978893  546345 cri.go:89] found id: ""
	I1202 22:29:58.978916  546345 logs.go:282] 0 containers: []
	W1202 22:29:58.978924  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:58.978931  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:58.978990  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:59.005270  546345 cri.go:89] found id: ""
	I1202 22:29:59.005299  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.005309  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:59.005316  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:59.005396  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:59.029424  546345 cri.go:89] found id: ""
	I1202 22:29:59.029453  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.029461  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:59.029468  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:59.029525  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:59.053363  546345 cri.go:89] found id: ""
	I1202 22:29:59.053398  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.053407  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:59.053414  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:59.053481  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:59.078974  546345 cri.go:89] found id: ""
	I1202 22:29:59.079051  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.079073  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:59.079088  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:59.079162  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:59.103336  546345 cri.go:89] found id: ""
	I1202 22:29:59.103358  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.103366  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:59.103383  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:59.103441  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:59.127855  546345 cri.go:89] found id: ""
	I1202 22:29:59.127929  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.127952  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:59.127972  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:59.128077  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:59.151167  546345 cri.go:89] found id: ""
	I1202 22:29:59.151196  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.151204  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:59.151213  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:59.151224  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:59.208516  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:59.208559  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:59.224755  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:59.224780  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:59.286748  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:59.279244    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.279739    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.281332    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.281754    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.283394    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:59.279244    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.279739    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.281332    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.281754    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.283394    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:59.286772  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:59.286787  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:59.311855  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:59.311889  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:01.391459  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:30:01.475431  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:30:01.475652  546345 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 22:30:01.478828  546345 out.go:179] * Enabled addons: 
	I1202 22:30:01.482057  546345 addons.go:530] duration metric: took 1m44.065625472s for enable addons: enabled=[]
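	From this point the log settles into a polling loop: roughly every three seconds minikube checks for a kube-apiserver process with pgrep, then asks crictl for containers matching each control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), and every query returns empty. The equivalent manual check, run inside the node, is just the two commands the loop itself issues:

	  # Hedged sketch: the same liveness probe the loop runs, executed by
	  # hand inside the node. Empty output from both means no apiserver
	  # process and no apiserver container, matching every cycle below.
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	  sudo crictl ps -a --quiet --name=kube-apiserver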
	I1202 22:30:01.843006  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:01.854584  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:01.854684  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:01.885465  546345 cri.go:89] found id: ""
	I1202 22:30:01.885501  546345 logs.go:282] 0 containers: []
	W1202 22:30:01.885510  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:01.885517  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:01.885587  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:01.917316  546345 cri.go:89] found id: ""
	I1202 22:30:01.917348  546345 logs.go:282] 0 containers: []
	W1202 22:30:01.917359  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:01.917366  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:01.917463  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:01.943052  546345 cri.go:89] found id: ""
	I1202 22:30:01.943078  546345 logs.go:282] 0 containers: []
	W1202 22:30:01.943086  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:01.943093  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:01.943153  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:01.969294  546345 cri.go:89] found id: ""
	I1202 22:30:01.969321  546345 logs.go:282] 0 containers: []
	W1202 22:30:01.969330  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:01.969339  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:01.969402  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:01.996336  546345 cri.go:89] found id: ""
	I1202 22:30:01.996405  546345 logs.go:282] 0 containers: []
	W1202 22:30:01.996428  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:01.996449  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:01.996537  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:02.025075  546345 cri.go:89] found id: ""
	I1202 22:30:02.025158  546345 logs.go:282] 0 containers: []
	W1202 22:30:02.025183  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:02.025203  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:02.025300  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:02.078384  546345 cri.go:89] found id: ""
	I1202 22:30:02.078450  546345 logs.go:282] 0 containers: []
	W1202 22:30:02.078474  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:02.078493  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:02.078585  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:02.124922  546345 cri.go:89] found id: ""
	I1202 22:30:02.125001  546345 logs.go:282] 0 containers: []
	W1202 22:30:02.125021  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:02.125031  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:02.125044  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:02.197595  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:02.188806    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.189743    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.191423    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.192018    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.193637    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:02.188806    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.189743    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.191423    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.192018    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.193637    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:02.197618  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:02.197634  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:02.223170  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:02.223203  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:02.255281  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:02.255348  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:02.310654  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:02.310690  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:04.828623  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:04.839157  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:04.839282  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:04.863863  546345 cri.go:89] found id: ""
	I1202 22:30:04.863887  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.863896  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:04.863903  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:04.863996  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:04.890006  546345 cri.go:89] found id: ""
	I1202 22:30:04.890031  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.890040  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:04.890047  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:04.890146  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:04.915998  546345 cri.go:89] found id: ""
	I1202 22:30:04.916021  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.916035  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:04.916042  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:04.916100  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:04.940395  546345 cri.go:89] found id: ""
	I1202 22:30:04.940420  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.940429  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:04.940435  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:04.940495  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:04.964621  546345 cri.go:89] found id: ""
	I1202 22:30:04.964650  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.964660  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:04.964667  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:04.964737  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:04.989632  546345 cri.go:89] found id: ""
	I1202 22:30:04.989685  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.989694  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:04.989702  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:04.989760  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:05.019501  546345 cri.go:89] found id: ""
	I1202 22:30:05.019528  546345 logs.go:282] 0 containers: []
	W1202 22:30:05.019537  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:05.019545  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:05.019610  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:05.049637  546345 cri.go:89] found id: ""
	I1202 22:30:05.049682  546345 logs.go:282] 0 containers: []
	W1202 22:30:05.049690  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:05.049700  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:05.049711  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:05.088244  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:05.088281  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:05.133381  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:05.133409  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:05.194841  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:05.194874  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:05.210533  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:05.210560  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:05.273348  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:05.265533    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.265959    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.267751    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.268062    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.270006    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:05.265533    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.265959    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.267751    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.268062    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.270006    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
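	Because "kubectl describe nodes" fails with connection refused on every cycle, each pass falls back to gathering node-local logs instead. Collected by hand on the node, that fallback amounts to the same four commands invoked verbatim throughout this section:

	  # Hedged sketch: the node-local log collection minikube falls back to
	  # when the apiserver is unreachable, as invoked in the log above.
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u containerd -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a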
	I1202 22:30:07.774501  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:07.784828  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:07.784927  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:07.814568  546345 cri.go:89] found id: ""
	I1202 22:30:07.814610  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.814619  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:07.814627  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:07.814711  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:07.839281  546345 cri.go:89] found id: ""
	I1202 22:30:07.839306  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.839325  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:07.839333  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:07.839410  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:07.863734  546345 cri.go:89] found id: ""
	I1202 22:30:07.863756  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.863764  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:07.863771  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:07.863830  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:07.887517  546345 cri.go:89] found id: ""
	I1202 22:30:07.887541  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.887549  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:07.887556  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:07.887615  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:07.912577  546345 cri.go:89] found id: ""
	I1202 22:30:07.912599  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.912608  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:07.912614  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:07.912684  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:07.937037  546345 cri.go:89] found id: ""
	I1202 22:30:07.937062  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.937071  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:07.937088  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:07.937153  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:07.961873  546345 cri.go:89] found id: ""
	I1202 22:30:07.961901  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.961910  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:07.961916  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:07.961974  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:07.985864  546345 cri.go:89] found id: ""
	I1202 22:30:07.985890  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.985906  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:07.985917  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:07.985928  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:08.011244  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:08.011284  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:08.055290  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:08.055321  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:08.134015  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:08.134069  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:08.154013  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:08.154041  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:08.223778  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:08.216502    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.217150    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.218711    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.219222    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.220667    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:08.216502    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.217150    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.218711    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.219222    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.220667    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:10.723964  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:10.736098  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:10.736214  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:10.761205  546345 cri.go:89] found id: ""
	I1202 22:30:10.761227  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.761236  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:10.761243  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:10.761303  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:10.785829  546345 cri.go:89] found id: ""
	I1202 22:30:10.785856  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.785865  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:10.785872  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:10.785931  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:10.815724  546345 cri.go:89] found id: ""
	I1202 22:30:10.815748  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.815757  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:10.815767  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:10.815844  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:10.840563  546345 cri.go:89] found id: ""
	I1202 22:30:10.840586  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.840594  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:10.840601  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:10.840667  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:10.869275  546345 cri.go:89] found id: ""
	I1202 22:30:10.869349  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.869372  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:10.869391  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:10.869478  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:10.894450  546345 cri.go:89] found id: ""
	I1202 22:30:10.894477  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.894486  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:10.894493  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:10.894572  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:10.919134  546345 cri.go:89] found id: ""
	I1202 22:30:10.919161  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.919170  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:10.919177  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:10.919238  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:10.944009  546345 cri.go:89] found id: ""
	I1202 22:30:10.944035  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.944044  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:10.944053  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:10.944066  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:11.000144  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:11.000183  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:11.018501  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:11.018532  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:11.149770  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:11.141251    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.142054    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.143941    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.144500    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.146190    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:11.141251    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.142054    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.143941    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.144500    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.146190    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:11.149837  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:11.149860  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:11.175018  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:11.175055  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:13.702967  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:13.713482  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:13.713560  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:13.739844  546345 cri.go:89] found id: ""
	I1202 22:30:13.739867  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.739876  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:13.739886  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:13.739943  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:13.765162  546345 cri.go:89] found id: ""
	I1202 22:30:13.765184  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.765192  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:13.765199  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:13.765256  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:13.790968  546345 cri.go:89] found id: ""
	I1202 22:30:13.790991  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.790999  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:13.791005  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:13.791069  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:13.816755  546345 cri.go:89] found id: ""
	I1202 22:30:13.816791  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.816799  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:13.816806  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:13.816869  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:13.843444  546345 cri.go:89] found id: ""
	I1202 22:30:13.843469  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.843477  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:13.843484  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:13.843551  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:13.868489  546345 cri.go:89] found id: ""
	I1202 22:30:13.868514  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.868523  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:13.868530  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:13.868608  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:13.893527  546345 cri.go:89] found id: ""
	I1202 22:30:13.893552  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.893560  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:13.893567  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:13.893624  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:13.919358  546345 cri.go:89] found id: ""
	I1202 22:30:13.919382  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.919390  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:13.919400  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:13.919411  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:13.946818  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:13.946846  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:14.004198  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:14.004294  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:14.021120  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:14.021157  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:14.145347  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:14.136103    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.138065    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.138857    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.140566    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.141159    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:14.136103    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.138065    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.138857    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.140566    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.141159    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:14.145369  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:14.145382  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:16.669687  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:16.680323  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:16.680426  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:16.705295  546345 cri.go:89] found id: ""
	I1202 22:30:16.705320  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.705329  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:16.705335  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:16.705394  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:16.729538  546345 cri.go:89] found id: ""
	I1202 22:30:16.729633  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.729648  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:16.729682  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:16.729766  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:16.754022  546345 cri.go:89] found id: ""
	I1202 22:30:16.754045  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.754053  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:16.754059  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:16.754119  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:16.780138  546345 cri.go:89] found id: ""
	I1202 22:30:16.780163  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.780171  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:16.780178  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:16.780237  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:16.805096  546345 cri.go:89] found id: ""
	I1202 22:30:16.805123  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.805134  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:16.805141  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:16.805201  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:16.830436  546345 cri.go:89] found id: ""
	I1202 22:30:16.830461  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.830470  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:16.830477  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:16.830537  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:16.859101  546345 cri.go:89] found id: ""
	I1202 22:30:16.859126  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.859135  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:16.859142  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:16.859201  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:16.884001  546345 cri.go:89] found id: ""
	I1202 22:30:16.884025  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.884033  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:16.884043  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:16.884054  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:16.919216  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:16.919242  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:16.974540  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:16.974574  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:16.990333  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:16.990361  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:17.096330  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:17.076545    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.087821    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.088549    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.090292    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.090828    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:17.076545    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.087821    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.088549    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.090292    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.090828    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:17.096351  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:17.096363  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:19.641119  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:19.651302  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:19.651372  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:19.675892  546345 cri.go:89] found id: ""
	I1202 22:30:19.675920  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.675929  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:19.675935  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:19.675993  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:19.700442  546345 cri.go:89] found id: ""
	I1202 22:30:19.700472  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.700480  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:19.700487  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:19.700545  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:19.724905  546345 cri.go:89] found id: ""
	I1202 22:30:19.724933  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.724941  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:19.724948  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:19.725008  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:19.749042  546345 cri.go:89] found id: ""
	I1202 22:30:19.749064  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.749072  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:19.749079  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:19.749142  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:19.772319  546345 cri.go:89] found id: ""
	I1202 22:30:19.772346  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.772354  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:19.772361  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:19.772423  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:19.796590  546345 cri.go:89] found id: ""
	I1202 22:30:19.796661  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.796685  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:19.796706  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:19.796791  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:19.820897  546345 cri.go:89] found id: ""
	I1202 22:30:19.820971  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.820994  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:19.821013  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:19.821097  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:19.845057  546345 cri.go:89] found id: ""
	I1202 22:30:19.845127  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.845151  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:19.845173  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:19.845210  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:19.901157  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:19.901190  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:19.916681  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:19.916709  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:19.978835  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:19.970731    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.971143    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.973694    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.974147    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.975596    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:19.970731    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.971143    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.973694    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.974147    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.975596    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
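	The describe-nodes failure is a symptom rather than a root cause: since no kube-apiserver container was found above, nothing is listening on port 8443 inside the node, so kubectl's TCP dial to localhost:8443 is refused. A minimal reachability check of the same endpoint, assuming only the host and port shown in the log (a plain TCP dial; no TLS handshake is attempted):

	// Sketch of a reachability check matching the error above. A refused
	// dial here reproduces kubectl's
	// "dial tcp [::1]:8443: connect: connection refused".
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}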
	I1202 22:30:19.978855  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:19.978868  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:20.003532  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:20.003576  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
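	From here the log repeats the same cycle roughly every three seconds: pgrep for a kube-apiserver process, re-probe the CRI containers, re-gather logs. A minimal sketch of that wait loop, with the pgrep pattern copied from the log; the timeout, helper name, and 3s sleep cadence are illustrative assumptions inferred from the timestamps:

	// Hypothetical poll-until-deadline loop mirroring the retries in the
	// log. pgrep exits nonzero when nothing matches, so err == nil means
	// a kube-apiserver process was found.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// Equivalent of: sudo pgrep -xnf kube-apiserver.*minikube.*
			if err := exec.Command("sudo", "pgrep", "-xnf",
				"kube-apiserver.*minikube.*").Run(); err == nil {
				return nil // process found
			}
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}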
	I1202 22:30:22.540194  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:22.550669  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:22.550752  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:22.575137  546345 cri.go:89] found id: ""
	I1202 22:30:22.575162  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.575179  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:22.575186  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:22.575246  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:22.600172  546345 cri.go:89] found id: ""
	I1202 22:30:22.600199  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.600208  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:22.600214  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:22.600280  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:22.627626  546345 cri.go:89] found id: ""
	I1202 22:30:22.627652  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.627661  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:22.627667  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:22.627727  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:22.652380  546345 cri.go:89] found id: ""
	I1202 22:30:22.652407  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.652416  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:22.652422  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:22.652483  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:22.679899  546345 cri.go:89] found id: ""
	I1202 22:30:22.679924  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.679933  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:22.679939  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:22.679999  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:22.704508  546345 cri.go:89] found id: ""
	I1202 22:30:22.704533  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.704542  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:22.704548  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:22.704623  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:22.729344  546345 cri.go:89] found id: ""
	I1202 22:30:22.729372  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.729380  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:22.729387  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:22.729451  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:22.753872  546345 cri.go:89] found id: ""
	I1202 22:30:22.753899  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.753908  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:22.753918  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:22.753929  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:22.810619  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:22.810654  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:22.826861  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:22.826887  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:22.891768  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:22.882105    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.883827    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.884903    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.886521    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.887080    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:22.882105    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.883827    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.884903    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.886521    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.887080    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:22.891788  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:22.891801  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:22.915527  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:22.915563  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:25.443424  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:25.454070  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:25.454140  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:25.477867  546345 cri.go:89] found id: ""
	I1202 22:30:25.477888  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.477896  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:25.477902  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:25.477961  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:25.503405  546345 cri.go:89] found id: ""
	I1202 22:30:25.503440  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.503449  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:25.503456  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:25.503548  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:25.528678  546345 cri.go:89] found id: ""
	I1202 22:30:25.528703  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.528711  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:25.528718  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:25.528784  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:25.555479  546345 cri.go:89] found id: ""
	I1202 22:30:25.555505  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.555513  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:25.555520  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:25.555587  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:25.588375  546345 cri.go:89] found id: ""
	I1202 22:30:25.588398  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.588408  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:25.588415  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:25.588475  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:25.613403  546345 cri.go:89] found id: ""
	I1202 22:30:25.613488  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.613511  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:25.613532  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:25.613627  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:25.644249  546345 cri.go:89] found id: ""
	I1202 22:30:25.644273  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.644282  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:25.644289  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:25.644348  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:25.669360  546345 cri.go:89] found id: ""
	I1202 22:30:25.669385  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.669394  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:25.669432  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:25.669448  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:25.701067  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:25.701095  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:25.755359  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:25.755393  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:25.771118  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:25.771147  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:25.830809  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:25.823565    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.824061    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.825693    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.826148    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.827594    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:25.823565    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.824061    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.825693    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.826148    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.827594    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:25.830832  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:25.830845  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:28.355998  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:28.366515  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:28.366588  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:28.391593  546345 cri.go:89] found id: ""
	I1202 22:30:28.391618  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.391627  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:28.391634  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:28.391694  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:28.420025  546345 cri.go:89] found id: ""
	I1202 22:30:28.420051  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.420060  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:28.420073  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:28.420137  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:28.444623  546345 cri.go:89] found id: ""
	I1202 22:30:28.444647  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.444655  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:28.444662  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:28.444726  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:28.469992  546345 cri.go:89] found id: ""
	I1202 22:30:28.470015  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.470024  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:28.470030  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:28.470089  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:28.495503  546345 cri.go:89] found id: ""
	I1202 22:30:28.495580  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.495602  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:28.495616  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:28.495687  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:28.520105  546345 cri.go:89] found id: ""
	I1202 22:30:28.520130  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.520139  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:28.520145  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:28.520207  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:28.547412  546345 cri.go:89] found id: ""
	I1202 22:30:28.547444  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.547454  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:28.547460  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:28.547522  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:28.572324  546345 cri.go:89] found id: ""
	I1202 22:30:28.572349  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.572358  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:28.572367  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:28.572379  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:28.587929  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:28.587952  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:28.651756  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:28.642887    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.643983    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.645771    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.646308    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.648089    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:28.642887    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.643983    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.645771    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.646308    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.648089    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:28.651790  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:28.651803  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:28.676386  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:28.676421  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:28.708051  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:28.708079  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:31.265370  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:31.275659  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:31.275728  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:31.335888  546345 cri.go:89] found id: ""
	I1202 22:30:31.335928  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.335956  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:31.335970  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:31.336049  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:31.386854  546345 cri.go:89] found id: ""
	I1202 22:30:31.386880  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.386888  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:31.386895  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:31.386979  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:31.410707  546345 cri.go:89] found id: ""
	I1202 22:30:31.410731  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.410739  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:31.410746  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:31.410804  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:31.439172  546345 cri.go:89] found id: ""
	I1202 22:30:31.439239  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.439263  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:31.439276  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:31.439355  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:31.467199  546345 cri.go:89] found id: ""
	I1202 22:30:31.467277  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.467293  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:31.467301  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:31.467390  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:31.495081  546345 cri.go:89] found id: ""
	I1202 22:30:31.495155  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.495178  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:31.495193  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:31.495270  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:31.518280  546345 cri.go:89] found id: ""
	I1202 22:30:31.518306  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.518315  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:31.518323  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:31.518400  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:31.543715  546345 cri.go:89] found id: ""
	I1202 22:30:31.543757  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.543793  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:31.543809  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:31.543821  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:31.601359  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:31.601392  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:31.617291  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:31.617323  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:31.682689  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:31.674005    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.674679    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.676468    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.677142    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.678841    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:31.674005    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.674679    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.676468    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.677142    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.678841    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:31.682713  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:31.682727  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:31.706626  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:31.706661  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:34.235905  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:34.246438  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:34.246560  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:34.271279  546345 cri.go:89] found id: ""
	I1202 22:30:34.271350  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.271365  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:34.271374  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:34.271434  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:34.303460  546345 cri.go:89] found id: ""
	I1202 22:30:34.303498  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.303507  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:34.303513  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:34.303635  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:34.355759  546345 cri.go:89] found id: ""
	I1202 22:30:34.355786  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.355795  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:34.355801  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:34.355908  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:34.402466  546345 cri.go:89] found id: ""
	I1202 22:30:34.402553  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.402572  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:34.402580  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:34.402654  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:34.431909  546345 cri.go:89] found id: ""
	I1202 22:30:34.431932  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.431941  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:34.431947  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:34.432004  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:34.455451  546345 cri.go:89] found id: ""
	I1202 22:30:34.455476  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.455484  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:34.455491  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:34.455632  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:34.478771  546345 cri.go:89] found id: ""
	I1202 22:30:34.478797  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.478805  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:34.478812  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:34.478904  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:34.502377  546345 cri.go:89] found id: ""
	I1202 22:30:34.502452  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.502468  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:34.502479  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:34.502490  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:34.559881  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:34.559925  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:34.576755  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:34.576785  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:34.640203  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:34.633348    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.633906    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.635346    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.635740    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.637154    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:34.633348    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.633906    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.635346    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.635740    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.637154    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:34.640223  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:34.640236  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:34.664331  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:34.664368  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:37.198596  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:37.208910  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:37.208981  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:37.233321  546345 cri.go:89] found id: ""
	I1202 22:30:37.233346  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.233354  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:37.233361  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:37.233419  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:37.259307  546345 cri.go:89] found id: ""
	I1202 22:30:37.259331  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.259340  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:37.259346  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:37.259404  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:37.282333  546345 cri.go:89] found id: ""
	I1202 22:30:37.282358  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.282367  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:37.282373  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:37.282430  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:37.351993  546345 cri.go:89] found id: ""
	I1202 22:30:37.352018  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.352027  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:37.352034  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:37.352124  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:37.398805  546345 cri.go:89] found id: ""
	I1202 22:30:37.398829  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.398840  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:37.398847  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:37.398912  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:37.422987  546345 cri.go:89] found id: ""
	I1202 22:30:37.423010  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.423019  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:37.423026  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:37.423100  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:37.447502  546345 cri.go:89] found id: ""
	I1202 22:30:37.447528  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.447537  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:37.447544  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:37.447630  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:37.471899  546345 cri.go:89] found id: ""
	I1202 22:30:37.471934  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.471943  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:37.471952  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:37.471963  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:37.528313  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:37.528350  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:37.544433  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:37.544464  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:37.611970  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:37.603634    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.604306    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.606167    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.606744    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.608686    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:37.603634    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.604306    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.606167    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.606744    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.608686    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:37.611994  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:37.612007  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:37.636937  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:37.636971  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:40.165587  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:40.177235  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:40.177323  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:40.205543  546345 cri.go:89] found id: ""
	I1202 22:30:40.205568  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.205576  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:40.205583  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:40.205644  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:40.232642  546345 cri.go:89] found id: ""
	I1202 22:30:40.232668  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.232677  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:40.232684  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:40.232746  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:40.259447  546345 cri.go:89] found id: ""
	I1202 22:30:40.259482  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.259496  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:40.259503  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:40.259591  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:40.297166  546345 cri.go:89] found id: ""
	I1202 22:30:40.297190  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.297198  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:40.297205  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:40.297268  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:40.337983  546345 cri.go:89] found id: ""
	I1202 22:30:40.338005  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.338014  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:40.338020  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:40.338079  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:40.380237  546345 cri.go:89] found id: ""
	I1202 22:30:40.380266  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.380274  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:40.380282  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:40.380343  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:40.412498  546345 cri.go:89] found id: ""
	I1202 22:30:40.412563  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.412572  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:40.412579  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:40.412637  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:40.441910  546345 cri.go:89] found id: ""
	I1202 22:30:40.441934  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.441943  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:40.441952  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:40.441969  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:40.496209  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:40.496245  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:40.512922  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:40.512953  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:40.580850  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:40.572953    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.573782    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.575424    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.575716    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.577152    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:40.572953    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.573782    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.575424    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.575716    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.577152    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:40.580875  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:40.580887  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:40.605967  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:40.606001  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:43.139166  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:43.149443  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:43.149516  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:43.177064  546345 cri.go:89] found id: ""
	I1202 22:30:43.177091  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.177099  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:43.177106  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:43.177164  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:43.201811  546345 cri.go:89] found id: ""
	I1202 22:30:43.201837  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.201845  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:43.201852  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:43.201912  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:43.225492  546345 cri.go:89] found id: ""
	I1202 22:30:43.225520  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.225529  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:43.225536  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:43.225594  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:43.249036  546345 cri.go:89] found id: ""
	I1202 22:30:43.249064  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.249072  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:43.249079  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:43.249139  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:43.277251  546345 cri.go:89] found id: ""
	I1202 22:30:43.277276  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.277285  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:43.277297  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:43.277354  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:43.314360  546345 cri.go:89] found id: ""
	I1202 22:30:43.314396  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.314406  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:43.314413  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:43.314488  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:43.374629  546345 cri.go:89] found id: ""
	I1202 22:30:43.374657  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.374666  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:43.374672  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:43.374730  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:43.416766  546345 cri.go:89] found id: ""
	I1202 22:30:43.416794  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.416803  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:43.416812  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:43.416823  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:43.471606  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:43.471644  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:43.487334  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:43.487362  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:43.553915  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:43.545764    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:43.546550    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:43.548203    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:43.548575    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:43.550132    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:43.553939  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:43.553952  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:43.579222  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:43.579258  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
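	The container-status probe is a double fallback: "which crictl" resolves the binary if it is on root's PATH, the "|| echo crictl" keeps the command well-formed when it is not, and "|| sudo docker ps -a" covers nodes where the crictl invocation itself fails. The same pattern in isolation (illustrative; which tool actually answers depends on the node image):

	# Prefer crictl when it resolves; fall back to the bare name, and finally
	# to docker if the crictl invocation exits non-zero.
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a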
	I1202 22:30:46.107248  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:46.118081  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:46.118150  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:46.142754  546345 cri.go:89] found id: ""
	I1202 22:30:46.142781  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.142789  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:46.142796  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:46.142861  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:46.169825  546345 cri.go:89] found id: ""
	I1202 22:30:46.169849  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.169858  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:46.169864  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:46.169929  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:46.196691  546345 cri.go:89] found id: ""
	I1202 22:30:46.196719  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.196728  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:46.196734  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:46.196796  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:46.221449  546345 cri.go:89] found id: ""
	I1202 22:30:46.221476  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.221485  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:46.221492  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:46.221552  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:46.246043  546345 cri.go:89] found id: ""
	I1202 22:30:46.246108  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.246131  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:46.246145  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:46.246227  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:46.271663  546345 cri.go:89] found id: ""
	I1202 22:30:46.271687  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.271695  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:46.271702  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:46.271760  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:46.315379  546345 cri.go:89] found id: ""
	I1202 22:30:46.315404  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.315413  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:46.315420  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:46.315477  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:46.359855  546345 cri.go:89] found id: ""
	I1202 22:30:46.359883  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.359893  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:46.359903  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:46.359915  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:46.377127  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:46.377158  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:46.445559  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:46.437310    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.438174    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.439869    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.440445    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.442197    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:46.445583  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:46.445605  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:46.473713  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:46.473754  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:46.501189  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:46.501221  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:49.058128  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:49.068126  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:49.068198  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:49.092263  546345 cri.go:89] found id: ""
	I1202 22:30:49.092288  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.092297  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:49.092303  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:49.092360  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:49.115983  546345 cri.go:89] found id: ""
	I1202 22:30:49.116008  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.116017  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:49.116024  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:49.116081  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:49.139874  546345 cri.go:89] found id: ""
	I1202 22:30:49.139899  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.139908  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:49.139915  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:49.139971  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:49.164359  546345 cri.go:89] found id: ""
	I1202 22:30:49.164388  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.164397  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:49.164404  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:49.164485  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:49.189339  546345 cri.go:89] found id: ""
	I1202 22:30:49.189365  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.189374  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:49.189383  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:49.189440  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:49.213800  546345 cri.go:89] found id: ""
	I1202 22:30:49.213826  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.213835  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:49.213842  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:49.213899  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:49.238436  546345 cri.go:89] found id: ""
	I1202 22:30:49.238463  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.238473  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:49.238480  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:49.238540  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:49.267385  546345 cri.go:89] found id: ""
	I1202 22:30:49.267459  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.267483  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:49.267500  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:49.267523  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:49.332624  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:49.332664  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:49.365875  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:49.365902  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:49.443796  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:49.436340    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.436862    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.438439    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.438890    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.440534    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:49.443869  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:49.443888  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:49.467900  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:49.467933  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:51.996457  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:52.009596  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:52.009694  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:52.037146  546345 cri.go:89] found id: ""
	I1202 22:30:52.037172  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.037190  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:52.037197  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:52.037257  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:52.063683  546345 cri.go:89] found id: ""
	I1202 22:30:52.063708  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.063717  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:52.063724  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:52.063786  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:52.089573  546345 cri.go:89] found id: ""
	I1202 22:30:52.089598  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.089606  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:52.089613  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:52.089704  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:52.114785  546345 cri.go:89] found id: ""
	I1202 22:30:52.114810  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.114819  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:52.114826  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:52.114884  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:52.137456  546345 cri.go:89] found id: ""
	I1202 22:30:52.137479  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.137489  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:52.137495  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:52.137552  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:52.161392  546345 cri.go:89] found id: ""
	I1202 22:30:52.161418  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.161426  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:52.161433  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:52.161544  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:52.186619  546345 cri.go:89] found id: ""
	I1202 22:30:52.186640  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.186648  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:52.186658  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:52.186717  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:52.211047  546345 cri.go:89] found id: ""
	I1202 22:30:52.211069  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.211077  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:52.211086  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:52.211097  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:52.240049  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:52.240079  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:52.297727  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:52.297804  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:52.326988  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:52.327061  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:52.421545  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:52.413695    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.414266    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.415896    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.416344    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.418034    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:52.421566  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:52.421578  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:54.945402  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:54.955618  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:54.955688  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:54.981106  546345 cri.go:89] found id: ""
	I1202 22:30:54.981132  546345 logs.go:282] 0 containers: []
	W1202 22:30:54.981140  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:54.981147  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:54.981210  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:55.017766  546345 cri.go:89] found id: ""
	I1202 22:30:55.017789  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.017798  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:55.017805  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:55.017886  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:55.051218  546345 cri.go:89] found id: ""
	I1202 22:30:55.051293  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.051320  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:55.051342  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:55.051449  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:55.092842  546345 cri.go:89] found id: ""
	I1202 22:30:55.092869  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.092879  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:55.092886  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:55.092955  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:55.131432  546345 cri.go:89] found id: ""
	I1202 22:30:55.131517  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.131546  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:55.131570  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:55.131702  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:55.166611  546345 cri.go:89] found id: ""
	I1202 22:30:55.166639  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.166653  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:55.166661  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:55.166737  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:55.197157  546345 cri.go:89] found id: ""
	I1202 22:30:55.197183  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.197199  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:55.197206  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:55.197277  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:55.229014  546345 cri.go:89] found id: ""
	I1202 22:30:55.229045  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.229053  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:55.229062  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:55.229074  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:55.284839  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:55.284877  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:55.312855  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:55.312884  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:55.414558  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:55.406788    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.407346    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.408947    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.409345    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.410922    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:55.414580  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:55.414595  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:55.439435  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:55.439472  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:57.966587  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:57.977332  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:57.977425  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:58.009109  546345 cri.go:89] found id: ""
	I1202 22:30:58.009146  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.009155  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:58.009162  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:58.009277  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:58.034957  546345 cri.go:89] found id: ""
	I1202 22:30:58.034980  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.034989  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:58.034996  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:58.035075  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:58.059651  546345 cri.go:89] found id: ""
	I1202 22:30:58.059677  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.059687  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:58.059694  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:58.059754  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:58.092476  546345 cri.go:89] found id: ""
	I1202 22:30:58.092510  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.092520  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:58.092527  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:58.092601  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:58.116505  546345 cri.go:89] found id: ""
	I1202 22:30:58.116531  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.116539  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:58.116545  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:58.116617  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:58.141152  546345 cri.go:89] found id: ""
	I1202 22:30:58.141180  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.141189  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:58.141196  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:58.141252  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:58.167272  546345 cri.go:89] found id: ""
	I1202 22:30:58.167294  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.167302  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:58.167308  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:58.167365  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:58.193236  546345 cri.go:89] found id: ""
	I1202 22:30:58.193311  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.193334  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:58.193351  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:58.193374  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:58.248292  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:58.248365  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:58.263580  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:58.263610  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:58.374750  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:58.366839    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.367545    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.369133    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.369607    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.371475    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:58.374772  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:58.374784  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:58.401522  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:58.401558  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:00.931781  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:00.941965  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:00.942042  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:00.966926  546345 cri.go:89] found id: ""
	I1202 22:31:00.966950  546345 logs.go:282] 0 containers: []
	W1202 22:31:00.966958  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:00.966965  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:00.967026  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:00.991438  546345 cri.go:89] found id: ""
	I1202 22:31:00.991463  546345 logs.go:282] 0 containers: []
	W1202 22:31:00.991472  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:00.991479  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:00.991538  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:01.019713  546345 cri.go:89] found id: ""
	I1202 22:31:01.019737  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.019745  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:01.019752  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:01.019809  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:01.044143  546345 cri.go:89] found id: ""
	I1202 22:31:01.044166  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.044174  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:01.044181  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:01.044240  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:01.069071  546345 cri.go:89] found id: ""
	I1202 22:31:01.069094  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.069102  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:01.069109  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:01.069170  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:01.097613  546345 cri.go:89] found id: ""
	I1202 22:31:01.097639  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.097648  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:01.097688  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:01.097754  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:01.124227  546345 cri.go:89] found id: ""
	I1202 22:31:01.124251  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.124260  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:01.124267  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:01.124329  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:01.150457  546345 cri.go:89] found id: ""
	I1202 22:31:01.150483  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.150491  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:01.150501  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:01.150512  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:01.175721  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:01.175753  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:01.204876  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:01.204907  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:01.261532  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:01.261567  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:01.277504  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:01.277531  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:01.369721  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:01.355410    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.360001    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.360744    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.364140    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.364693    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:03.870061  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:03.880451  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:03.880522  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:03.903663  546345 cri.go:89] found id: ""
	I1202 22:31:03.903688  546345 logs.go:282] 0 containers: []
	W1202 22:31:03.903698  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:03.903704  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:03.903767  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:03.927883  546345 cri.go:89] found id: ""
	I1202 22:31:03.927904  546345 logs.go:282] 0 containers: []
	W1202 22:31:03.927913  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:03.927920  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:03.927982  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:03.952301  546345 cri.go:89] found id: ""
	I1202 22:31:03.952324  546345 logs.go:282] 0 containers: []
	W1202 22:31:03.952332  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:03.952339  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:03.952397  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:03.977367  546345 cri.go:89] found id: ""
	I1202 22:31:03.977390  546345 logs.go:282] 0 containers: []
	W1202 22:31:03.977399  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:03.977406  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:03.977465  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:04.003308  546345 cri.go:89] found id: ""
	I1202 22:31:04.003336  546345 logs.go:282] 0 containers: []
	W1202 22:31:04.003347  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:04.003361  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:04.003438  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:04.030694  546345 cri.go:89] found id: ""
	I1202 22:31:04.030718  546345 logs.go:282] 0 containers: []
	W1202 22:31:04.030731  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:04.030738  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:04.030812  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:04.056404  546345 cri.go:89] found id: ""
	I1202 22:31:04.056430  546345 logs.go:282] 0 containers: []
	W1202 22:31:04.056439  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:04.056446  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:04.056506  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:04.081740  546345 cri.go:89] found id: ""
	I1202 22:31:04.081762  546345 logs.go:282] 0 containers: []
	W1202 22:31:04.081770  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:04.081779  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:04.081792  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:04.109259  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:04.109285  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:04.165104  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:04.165137  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:04.181694  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:04.181725  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:04.241465  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:04.234394    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.234783    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.236525    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.236860    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.238270    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:04.241493  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:04.241506  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:06.766561  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:06.777372  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:06.777445  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:06.807209  546345 cri.go:89] found id: ""
	I1202 22:31:06.807235  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.807244  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:06.807251  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:06.807356  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:06.833401  546345 cri.go:89] found id: ""
	I1202 22:31:06.833424  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.833433  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:06.833439  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:06.833497  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:06.858407  546345 cri.go:89] found id: ""
	I1202 22:31:06.858434  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.858442  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:06.858449  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:06.858509  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:06.884341  546345 cri.go:89] found id: ""
	I1202 22:31:06.884367  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.884375  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:06.884382  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:06.884445  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:06.911764  546345 cri.go:89] found id: ""
	I1202 22:31:06.911787  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.911796  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:06.911802  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:06.911861  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:06.940179  546345 cri.go:89] found id: ""
	I1202 22:31:06.940204  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.940217  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:06.940225  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:06.940289  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:06.965277  546345 cri.go:89] found id: ""
	I1202 22:31:06.965304  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.965313  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:06.965320  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:06.965390  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:06.991270  546345 cri.go:89] found id: ""
	I1202 22:31:06.991294  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.991303  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:06.991313  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:06.991326  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:07.060741  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:07.051593    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.052288    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.053853    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.054275    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.057516    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:07.060762  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:07.060778  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:07.085921  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:07.085970  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:07.113268  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:07.113298  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:07.169055  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:07.169092  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:09.686487  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:09.697143  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:09.697217  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:09.722726  546345 cri.go:89] found id: ""
	I1202 22:31:09.722749  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.722760  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:09.722767  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:09.722826  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:09.748226  546345 cri.go:89] found id: ""
	I1202 22:31:09.748251  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.748260  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:09.748267  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:09.748327  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:09.774010  546345 cri.go:89] found id: ""
	I1202 22:31:09.774035  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.774043  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:09.774050  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:09.774109  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:09.800227  546345 cri.go:89] found id: ""
	I1202 22:31:09.800250  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.800259  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:09.800266  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:09.800328  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:09.828744  546345 cri.go:89] found id: ""
	I1202 22:31:09.828768  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.828777  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:09.828784  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:09.828843  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:09.853554  546345 cri.go:89] found id: ""
	I1202 22:31:09.853577  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.853586  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:09.853593  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:09.853672  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:09.879248  546345 cri.go:89] found id: ""
	I1202 22:31:09.879271  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.879279  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:09.879285  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:09.879350  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:09.908338  546345 cri.go:89] found id: ""
	I1202 22:31:09.908364  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.908373  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:09.908383  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:09.908394  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:09.936944  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:09.936974  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:09.993598  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:09.993644  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:10.010732  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:10.010766  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:10.084652  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:10.073833    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.074265    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.077620    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.078616    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.080339    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:10.084677  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:10.084692  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:12.613817  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:12.624680  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:12.624765  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:12.651202  546345 cri.go:89] found id: ""
	I1202 22:31:12.651227  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.651236  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:12.651243  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:12.651301  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:12.676106  546345 cri.go:89] found id: ""
	I1202 22:31:12.676130  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.676138  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:12.676145  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:12.676202  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:12.700680  546345 cri.go:89] found id: ""
	I1202 22:31:12.700706  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.700716  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:12.700723  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:12.700787  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:12.726023  546345 cri.go:89] found id: ""
	I1202 22:31:12.726049  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.726059  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:12.726066  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:12.726126  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:12.750927  546345 cri.go:89] found id: ""
	I1202 22:31:12.750951  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.750959  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:12.750966  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:12.751026  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:12.777535  546345 cri.go:89] found id: ""
	I1202 22:31:12.777562  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.777570  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:12.777577  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:12.777634  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:12.801546  546345 cri.go:89] found id: ""
	I1202 22:31:12.801572  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.801581  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:12.801588  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:12.801646  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:12.829909  546345 cri.go:89] found id: ""
	I1202 22:31:12.829932  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.829941  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:12.829950  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:12.829961  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:12.859869  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:12.859896  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:12.914732  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:12.914767  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:12.930844  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:12.930875  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:12.995842  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:12.988692    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.989211    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.990739    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.991189    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.992650    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:12.995865  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:12.995879  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:15.522875  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:15.533513  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:15.533591  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:15.569400  546345 cri.go:89] found id: ""
	I1202 22:31:15.569424  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.569433  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:15.569439  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:15.569496  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:15.628130  546345 cri.go:89] found id: ""
	I1202 22:31:15.628152  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.628161  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:15.628167  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:15.628228  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:15.653054  546345 cri.go:89] found id: ""
	I1202 22:31:15.653076  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.653085  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:15.653092  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:15.653149  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:15.678257  546345 cri.go:89] found id: ""
	I1202 22:31:15.678281  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.678290  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:15.678296  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:15.678353  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:15.702830  546345 cri.go:89] found id: ""
	I1202 22:31:15.702856  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.702864  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:15.702871  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:15.702936  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:15.728236  546345 cri.go:89] found id: ""
	I1202 22:31:15.728261  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.728270  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:15.728276  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:15.728336  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:15.753646  546345 cri.go:89] found id: ""
	I1202 22:31:15.753694  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.753703  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:15.753710  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:15.753772  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:15.778069  546345 cri.go:89] found id: ""
	I1202 22:31:15.778092  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.778101  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:15.778110  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:15.778121  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:15.834182  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:15.834217  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:15.850533  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:15.850572  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:15.911589  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:15.904443    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.904979    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.906513    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.906995    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.908448    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:15.911609  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:15.911621  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:15.936945  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:15.936977  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:18.470112  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:18.480648  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:18.480727  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:18.508083  546345 cri.go:89] found id: ""
	I1202 22:31:18.508109  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.508117  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:18.508124  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:18.508252  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:18.533123  546345 cri.go:89] found id: ""
	I1202 22:31:18.533149  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.533164  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:18.533172  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:18.533245  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:18.586767  546345 cri.go:89] found id: ""
	I1202 22:31:18.586791  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.586800  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:18.586806  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:18.586866  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:18.626205  546345 cri.go:89] found id: ""
	I1202 22:31:18.626227  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.626236  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:18.626242  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:18.626299  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:18.653977  546345 cri.go:89] found id: ""
	I1202 22:31:18.653998  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.654007  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:18.654013  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:18.654074  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:18.679194  546345 cri.go:89] found id: ""
	I1202 22:31:18.679227  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.679237  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:18.679244  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:18.679305  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:18.704215  546345 cri.go:89] found id: ""
	I1202 22:31:18.704280  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.704305  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:18.704326  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:18.704411  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:18.729467  546345 cri.go:89] found id: ""
	I1202 22:31:18.729536  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.729560  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:18.729583  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:18.729624  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:18.745333  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:18.745406  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:18.810842  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:18.802788    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.803411    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.805114    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.805719    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.807226    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:18.810886  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:18.810899  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:18.836014  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:18.836050  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:18.864189  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:18.864230  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:21.420147  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:21.430404  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:21.430516  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:21.454558  546345 cri.go:89] found id: ""
	I1202 22:31:21.454583  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.454592  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:21.454599  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:21.454658  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:21.478328  546345 cri.go:89] found id: ""
	I1202 22:31:21.478360  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.478369  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:21.478377  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:21.478445  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:21.502704  546345 cri.go:89] found id: ""
	I1202 22:31:21.502729  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.502737  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:21.502744  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:21.502805  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:21.528175  546345 cri.go:89] found id: ""
	I1202 22:31:21.528201  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.528209  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:21.528216  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:21.528278  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:21.616594  546345 cri.go:89] found id: ""
	I1202 22:31:21.616622  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.616632  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:21.616638  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:21.616697  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:21.645131  546345 cri.go:89] found id: ""
	I1202 22:31:21.645160  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.645168  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:21.645178  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:21.645238  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:21.671523  546345 cri.go:89] found id: ""
	I1202 22:31:21.671545  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.671553  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:21.671564  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:21.671624  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:21.695173  546345 cri.go:89] found id: ""
	I1202 22:31:21.695195  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.695203  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:21.695212  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:21.695222  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:21.719757  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:21.719792  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:21.749635  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:21.749681  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:21.808026  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:21.808062  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:21.823780  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:21.823809  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:21.884457  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:21.877494    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.878140    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.879593    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.879994    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.881385    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:24.384744  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:24.394799  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:24.394871  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:24.420708  546345 cri.go:89] found id: ""
	I1202 22:31:24.420731  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.420740  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:24.420747  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:24.420804  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:24.444913  546345 cri.go:89] found id: ""
	I1202 22:31:24.444938  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.444947  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:24.444953  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:24.445011  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:24.468474  546345 cri.go:89] found id: ""
	I1202 22:31:24.468562  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.468586  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:24.468619  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:24.468712  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:24.492364  546345 cri.go:89] found id: ""
	I1202 22:31:24.492435  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.492459  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:24.492479  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:24.492570  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:24.517358  546345 cri.go:89] found id: ""
	I1202 22:31:24.517434  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.517473  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:24.517498  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:24.517589  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:24.556717  546345 cri.go:89] found id: ""
	I1202 22:31:24.556800  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.556829  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:24.556870  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:24.556990  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:24.641499  546345 cri.go:89] found id: ""
	I1202 22:31:24.641533  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.641542  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:24.641549  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:24.641704  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:24.665998  546345 cri.go:89] found id: ""
	I1202 22:31:24.666024  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.666032  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:24.666041  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:24.666053  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:24.720801  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:24.720835  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:24.736228  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:24.736255  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:24.802911  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:24.795538    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.796038    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.797728    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.798172    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.799727    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:24.802934  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:24.802948  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:24.826675  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:24.826710  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:27.352424  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:27.363728  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:27.363800  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:27.388330  546345 cri.go:89] found id: ""
	I1202 22:31:27.388356  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.388365  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:27.388372  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:27.388430  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:27.412561  546345 cri.go:89] found id: ""
	I1202 22:31:27.412589  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.412598  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:27.412605  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:27.412664  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:27.436953  546345 cri.go:89] found id: ""
	I1202 22:31:27.436982  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.436991  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:27.436997  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:27.437057  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:27.461746  546345 cri.go:89] found id: ""
	I1202 22:31:27.461775  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.461783  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:27.461790  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:27.461847  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:27.489561  546345 cri.go:89] found id: ""
	I1202 22:31:27.489598  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.489607  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:27.489614  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:27.489708  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:27.517814  546345 cri.go:89] found id: ""
	I1202 22:31:27.517835  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.517844  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:27.517851  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:27.517909  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:27.545611  546345 cri.go:89] found id: ""
	I1202 22:31:27.545711  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.545734  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:27.545754  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:27.545839  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:27.603444  546345 cri.go:89] found id: ""
	I1202 22:31:27.603466  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.603474  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:27.603484  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:27.603497  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:27.674112  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:27.674149  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:27.690096  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:27.690128  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:27.752579  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:27.743812    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.744655    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.746641    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.747382    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.749154    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:27.752604  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:27.752617  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:27.777612  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:27.777647  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:30.305694  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:30.316225  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:30.316348  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:30.339916  546345 cri.go:89] found id: ""
	I1202 22:31:30.339950  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.339959  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:30.339974  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:30.340052  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:30.369549  546345 cri.go:89] found id: ""
	I1202 22:31:30.369575  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.369584  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:30.369590  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:30.369677  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:30.394634  546345 cri.go:89] found id: ""
	I1202 22:31:30.394711  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.394734  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:30.394749  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:30.394830  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:30.419244  546345 cri.go:89] found id: ""
	I1202 22:31:30.419271  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.419279  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:30.419286  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:30.419344  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:30.447382  546345 cri.go:89] found id: ""
	I1202 22:31:30.447414  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.447423  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:30.447430  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:30.447530  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:30.471131  546345 cri.go:89] found id: ""
	I1202 22:31:30.471155  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.471163  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:30.471170  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:30.471236  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:30.496091  546345 cri.go:89] found id: ""
	I1202 22:31:30.496116  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.496125  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:30.496132  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:30.496209  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:30.520739  546345 cri.go:89] found id: ""
	I1202 22:31:30.520767  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.520775  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:30.520785  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:30.520796  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:30.549966  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:30.550055  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:30.602152  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:30.602176  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:30.668135  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:30.668172  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:30.683585  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:30.683653  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:30.747838  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:30.740098    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.740709    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.742156    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.742602    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.744015    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:33.249502  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:33.259480  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:33.259551  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:33.282766  546345 cri.go:89] found id: ""
	I1202 22:31:33.282791  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.282799  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:33.282806  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:33.282866  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:33.308495  546345 cri.go:89] found id: ""
	I1202 22:31:33.308518  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.308533  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:33.308540  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:33.308597  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:33.331979  546345 cri.go:89] found id: ""
	I1202 22:31:33.332013  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.332023  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:33.332030  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:33.332100  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:33.356278  546345 cri.go:89] found id: ""
	I1202 22:31:33.356304  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.356313  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:33.356319  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:33.356378  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:33.384857  546345 cri.go:89] found id: ""
	I1202 22:31:33.384885  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.384893  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:33.384900  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:33.384959  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:33.409699  546345 cri.go:89] found id: ""
	I1202 22:31:33.409727  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.409735  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:33.409742  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:33.409818  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:33.433952  546345 cri.go:89] found id: ""
	I1202 22:31:33.433976  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.433984  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:33.433991  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:33.434048  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:33.457225  546345 cri.go:89] found id: ""
	I1202 22:31:33.457250  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.457265  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:33.457274  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:33.457286  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:33.481072  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:33.481106  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:33.513367  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:33.513402  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:33.575454  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:33.575500  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:33.611865  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:33.611895  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:33.687837  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:33.680605    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:33.681164    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:33.682734    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:33.683164    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:33.684656    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:36.188106  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:36.198524  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:36.198595  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:36.227262  546345 cri.go:89] found id: ""
	I1202 22:31:36.227286  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.227294  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:36.227301  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:36.227364  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:36.251229  546345 cri.go:89] found id: ""
	I1202 22:31:36.251254  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.251262  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:36.251269  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:36.251328  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:36.280094  546345 cri.go:89] found id: ""
	I1202 22:31:36.280118  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.280128  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:36.280135  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:36.280192  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:36.303557  546345 cri.go:89] found id: ""
	I1202 22:31:36.303589  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.303598  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:36.303606  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:36.303680  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:36.328036  546345 cri.go:89] found id: ""
	I1202 22:31:36.328099  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.328110  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:36.328117  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:36.328210  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:36.352844  546345 cri.go:89] found id: ""
	I1202 22:31:36.352919  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.352942  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:36.352963  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:36.353076  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:36.377059  546345 cri.go:89] found id: ""
	I1202 22:31:36.377123  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.377148  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:36.377169  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:36.377299  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:36.406912  546345 cri.go:89] found id: ""
	I1202 22:31:36.406939  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.406947  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:36.406957  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:36.406969  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:36.462620  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:36.462655  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:36.478602  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:36.478633  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:36.553409  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:36.534656    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:36.535346    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:36.536921    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:36.537223    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:36.539840    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:36.553440  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:36.553453  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:36.605527  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:36.605567  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:39.147765  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:39.158330  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:39.158399  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:39.184185  546345 cri.go:89] found id: ""
	I1202 22:31:39.184211  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.184220  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:39.184227  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:39.184286  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:39.211366  546345 cri.go:89] found id: ""
	I1202 22:31:39.211390  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.211399  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:39.211405  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:39.211465  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:39.239810  546345 cri.go:89] found id: ""
	I1202 22:31:39.239836  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.239846  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:39.239853  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:39.239914  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:39.264259  546345 cri.go:89] found id: ""
	I1202 22:31:39.264285  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.264294  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:39.264300  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:39.264357  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:39.288356  546345 cri.go:89] found id: ""
	I1202 22:31:39.288384  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.288394  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:39.288400  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:39.288459  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:39.312721  546345 cri.go:89] found id: ""
	I1202 22:31:39.312745  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.312754  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:39.312760  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:39.312817  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:39.337724  546345 cri.go:89] found id: ""
	I1202 22:31:39.337748  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.337756  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:39.337762  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:39.337821  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:39.362280  546345 cri.go:89] found id: ""
	I1202 22:31:39.362303  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.362311  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:39.362320  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:39.362332  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:39.389401  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:39.389425  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:39.449427  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:39.449471  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:39.464867  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:39.464897  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:39.527654  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:39.520193    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:39.521056    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:39.522506    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:39.522885    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:39.524329    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:39.527675  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:39.527691  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:42.058126  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:42.070220  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:42.070305  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:42.113161  546345 cri.go:89] found id: ""
	I1202 22:31:42.113187  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.113197  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:42.113205  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:42.113279  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:42.151146  546345 cri.go:89] found id: ""
	I1202 22:31:42.151178  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.151188  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:42.151195  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:42.151267  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:42.187923  546345 cri.go:89] found id: ""
	I1202 22:31:42.187951  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.187960  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:42.187968  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:42.188040  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:42.222980  546345 cri.go:89] found id: ""
	I1202 22:31:42.223003  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.223012  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:42.223020  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:42.223088  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:42.271018  546345 cri.go:89] found id: ""
	I1202 22:31:42.271046  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.271056  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:42.271064  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:42.271136  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:42.302817  546345 cri.go:89] found id: ""
	I1202 22:31:42.302893  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.302913  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:42.302929  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:42.303020  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:42.333498  546345 cri.go:89] found id: ""
	I1202 22:31:42.333526  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.333535  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:42.333543  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:42.333630  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:42.363457  546345 cri.go:89] found id: ""
	I1202 22:31:42.363485  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.363495  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:42.363505  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:42.363518  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:42.421844  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:42.421883  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:42.439113  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:42.439145  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:42.506768  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:42.497962    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:42.498854    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:42.500599    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:42.501068    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:42.502782    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:42.506791  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:42.506803  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:42.531455  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:42.531491  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:45.076035  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:45.089323  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:45.089414  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:45.121410  546345 cri.go:89] found id: ""
	I1202 22:31:45.121436  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.121445  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:45.121454  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:45.121523  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:45.158421  546345 cri.go:89] found id: ""
	I1202 22:31:45.158452  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.158461  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:45.158840  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:45.158933  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:45.226744  546345 cri.go:89] found id: ""
	I1202 22:31:45.226769  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.226778  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:45.226785  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:45.226855  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:45.277464  546345 cri.go:89] found id: ""
	I1202 22:31:45.277540  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.277560  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:45.277573  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:45.277920  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:45.321564  546345 cri.go:89] found id: ""
	I1202 22:31:45.321591  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.321600  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:45.321607  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:45.321695  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:45.349203  546345 cri.go:89] found id: ""
	I1202 22:31:45.349228  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.349236  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:45.349243  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:45.349302  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:45.377967  546345 cri.go:89] found id: ""
	I1202 22:31:45.377993  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.378001  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:45.378009  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:45.378068  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:45.404655  546345 cri.go:89] found id: ""
	I1202 22:31:45.404680  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.404689  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:45.404697  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:45.404709  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:45.459390  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:45.459424  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:45.474938  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:45.474964  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:45.551857  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:45.534337    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:45.534849    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:45.536311    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:45.536755    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:45.538167    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:45.551880  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:45.551893  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:45.599545  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:45.599577  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:48.142376  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:48.152835  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:48.152910  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:48.176887  546345 cri.go:89] found id: ""
	I1202 22:31:48.176913  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.176921  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:48.176928  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:48.176992  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:48.199841  546345 cri.go:89] found id: ""
	I1202 22:31:48.199865  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.199873  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:48.199879  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:48.199937  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:48.223323  546345 cri.go:89] found id: ""
	I1202 22:31:48.223346  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.223354  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:48.223361  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:48.223419  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:48.246053  546345 cri.go:89] found id: ""
	I1202 22:31:48.246079  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.246088  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:48.246095  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:48.246152  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:48.269713  546345 cri.go:89] found id: ""
	I1202 22:31:48.269739  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.269748  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:48.269755  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:48.269811  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:48.295336  546345 cri.go:89] found id: ""
	I1202 22:31:48.295359  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.295368  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:48.295374  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:48.295435  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:48.318964  546345 cri.go:89] found id: ""
	I1202 22:31:48.318989  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.319001  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:48.319009  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:48.319114  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:48.342776  546345 cri.go:89] found id: ""
	I1202 22:31:48.342803  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.342812  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:48.342821  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:48.342834  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:48.366473  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:48.366507  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:48.397880  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:48.397907  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:48.453030  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:48.453066  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:48.468428  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:48.468455  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:48.530252  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:48.521915    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.522761    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.524653    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.525302    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.526795    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:48.521915    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.522761    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.524653    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.525302    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.526795    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
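For reference, the repeated "No container was found" warnings above come from per-component crictl queries; the same probe can be replayed by hand on the node. A minimal sketch, assuming crictl is on the PATH and containerd is the runtime as in this job (empty output for every name means no control-plane containers were ever created):

    # Query each expected control-plane container by name, as the harness does.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids="$(sudo crictl ps -a --quiet --name="$name")"
      echo "$name: ${ids:-<none>}"
    done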
	I1202 22:31:51.030539  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:51.041072  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:51.041139  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:51.064958  546345 cri.go:89] found id: ""
	I1202 22:31:51.064986  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.064994  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:51.065004  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:51.065074  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:51.090247  546345 cri.go:89] found id: ""
	I1202 22:31:51.090275  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.090284  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:51.090290  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:51.090356  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:51.117187  546345 cri.go:89] found id: ""
	I1202 22:31:51.117224  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.117235  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:51.117242  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:51.117326  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:51.143456  546345 cri.go:89] found id: ""
	I1202 22:31:51.143483  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.143492  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:51.143499  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:51.143563  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:51.169463  546345 cri.go:89] found id: ""
	I1202 22:31:51.169542  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.169565  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:51.169587  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:51.169719  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:51.195977  546345 cri.go:89] found id: ""
	I1202 22:31:51.196019  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.196028  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:51.196035  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:51.196105  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:51.221006  546345 cri.go:89] found id: ""
	I1202 22:31:51.221030  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.221045  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:51.221051  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:51.221119  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:51.245434  546345 cri.go:89] found id: ""
	I1202 22:31:51.245457  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.245466  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:51.245475  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:51.245486  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:51.273171  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:51.273198  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:51.328523  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:51.328562  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:51.344211  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:51.344238  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:51.405812  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:51.397619    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.398419    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.399942    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.400527    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.402107    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:51.397619    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.398419    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.399942    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.400527    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.402107    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:51.405843  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:51.405859  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:53.930346  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:53.940572  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:53.940646  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:53.968500  546345 cri.go:89] found id: ""
	I1202 22:31:53.968531  546345 logs.go:282] 0 containers: []
	W1202 22:31:53.968540  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:53.968547  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:53.968605  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:53.993271  546345 cri.go:89] found id: ""
	I1202 22:31:53.993298  546345 logs.go:282] 0 containers: []
	W1202 22:31:53.993306  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:53.993314  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:53.993372  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:54.020928  546345 cri.go:89] found id: ""
	I1202 22:31:54.020956  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.020965  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:54.020973  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:54.021039  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:54.047236  546345 cri.go:89] found id: ""
	I1202 22:31:54.047260  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.047269  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:54.047276  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:54.047336  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:54.072186  546345 cri.go:89] found id: ""
	I1202 22:31:54.072219  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.072228  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:54.072235  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:54.072310  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:54.097358  546345 cri.go:89] found id: ""
	I1202 22:31:54.097390  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.097400  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:54.097407  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:54.097484  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:54.122635  546345 cri.go:89] found id: ""
	I1202 22:31:54.122739  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.122765  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:54.122787  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:54.122881  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:54.147140  546345 cri.go:89] found id: ""
	I1202 22:31:54.147205  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.147228  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:54.147244  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:54.147257  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:54.209277  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:54.202024    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.202800    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.204383    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.204707    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.206238    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:54.202024    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.202800    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.204383    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.204707    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.206238    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:54.209298  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:54.209312  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:54.233525  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:54.233564  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:54.267595  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:54.267623  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:54.322957  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:54.322991  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:56.839135  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:56.854872  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:56.854954  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:56.883302  546345 cri.go:89] found id: ""
	I1202 22:31:56.883327  546345 logs.go:282] 0 containers: []
	W1202 22:31:56.883335  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:56.883342  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:56.883400  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:56.909437  546345 cri.go:89] found id: ""
	I1202 22:31:56.909478  546345 logs.go:282] 0 containers: []
	W1202 22:31:56.909495  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:56.909502  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:56.909574  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:56.935567  546345 cri.go:89] found id: ""
	I1202 22:31:56.935592  546345 logs.go:282] 0 containers: []
	W1202 22:31:56.935600  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:56.935607  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:56.935700  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:56.962296  546345 cri.go:89] found id: ""
	I1202 22:31:56.962322  546345 logs.go:282] 0 containers: []
	W1202 22:31:56.962339  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:56.962352  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:56.962417  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:56.987308  546345 cri.go:89] found id: ""
	I1202 22:31:56.987333  546345 logs.go:282] 0 containers: []
	W1202 22:31:56.987341  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:56.987348  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:56.987409  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:57.017409  546345 cri.go:89] found id: ""
	I1202 22:31:57.017436  546345 logs.go:282] 0 containers: []
	W1202 22:31:57.017444  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:57.017451  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:57.017519  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:57.043570  546345 cri.go:89] found id: ""
	I1202 22:31:57.043593  546345 logs.go:282] 0 containers: []
	W1202 22:31:57.043601  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:57.043607  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:57.043670  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:57.068973  546345 cri.go:89] found id: ""
	I1202 22:31:57.069005  546345 logs.go:282] 0 containers: []
	W1202 22:31:57.069014  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:57.069023  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:57.069034  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:57.093239  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:57.093275  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:57.120751  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:57.120777  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:57.176173  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:57.176209  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:57.193001  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:57.193035  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:57.259032  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:57.251882    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.252406    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.253992    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.254374    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.255868    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:57.251882    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.252406    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.253992    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.254374    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.255868    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
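The describe-nodes step fails before kubectl ever authenticates: dial tcp [::1]:8443 is refused because nothing is listening on the apiserver port. A quick standalone check (hypothetical commands, not part of the harness; assumes ss and curl are available inside the node):

    # Confirm no process is bound to the apiserver port.
    sudo ss -tlnp | grep -w 8443 || echo "no listener on :8443"
    # A refused connection here reproduces the kubectl error above.
    curl -ksS https://localhost:8443/healthz || true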
	I1202 22:31:59.760716  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:59.771290  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:59.771364  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:59.819477  546345 cri.go:89] found id: ""
	I1202 22:31:59.819507  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.819521  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:59.819528  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:59.819609  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:59.879132  546345 cri.go:89] found id: ""
	I1202 22:31:59.879159  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.879168  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:59.879175  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:59.879235  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:59.909985  546345 cri.go:89] found id: ""
	I1202 22:31:59.910011  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.910020  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:59.910027  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:59.910083  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:59.934326  546345 cri.go:89] found id: ""
	I1202 22:31:59.934350  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.934359  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:59.934366  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:59.934424  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:59.963200  546345 cri.go:89] found id: ""
	I1202 22:31:59.963224  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.963233  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:59.963240  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:59.963327  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:59.989148  546345 cri.go:89] found id: ""
	I1202 22:31:59.989180  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.989190  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:59.989196  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:59.989302  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:00.074954  546345 cri.go:89] found id: ""
	I1202 22:32:00.075036  546345 logs.go:282] 0 containers: []
	W1202 22:32:00.075063  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:00.075085  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:00.075215  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:00.226233  546345 cri.go:89] found id: ""
	I1202 22:32:00.226259  546345 logs.go:282] 0 containers: []
	W1202 22:32:00.226269  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:00.226279  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:00.226293  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:00.336324  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:00.336441  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:00.371299  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:00.371905  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:00.484267  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:00.475120    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.475618    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.477981    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.479118    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.480037    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:00.475120    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.475618    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.477981    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.479118    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.480037    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:00.484297  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:00.484311  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:00.512091  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:00.512128  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:03.068479  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:03.078801  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:03.078893  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:03.103732  546345 cri.go:89] found id: ""
	I1202 22:32:03.103758  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.103766  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:03.103773  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:03.103832  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:03.128397  546345 cri.go:89] found id: ""
	I1202 22:32:03.128426  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.128435  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:03.128441  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:03.128501  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:03.153803  546345 cri.go:89] found id: ""
	I1202 22:32:03.153877  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.153899  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:03.153913  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:03.153988  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:03.181014  546345 cri.go:89] found id: ""
	I1202 22:32:03.181038  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.181047  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:03.181053  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:03.181152  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:03.210807  546345 cri.go:89] found id: ""
	I1202 22:32:03.210834  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.210843  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:03.210850  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:03.210911  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:03.239226  546345 cri.go:89] found id: ""
	I1202 22:32:03.239251  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.239260  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:03.239267  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:03.239326  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:03.263944  546345 cri.go:89] found id: ""
	I1202 22:32:03.263969  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.263978  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:03.263984  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:03.264044  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:03.287558  546345 cri.go:89] found id: ""
	I1202 22:32:03.287583  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.287592  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:03.287601  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:03.287612  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:03.311743  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:03.311776  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:03.343056  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:03.343083  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:03.397595  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:03.397629  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:03.413119  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:03.413155  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:03.475280  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:03.468130    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.468858    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.470478    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.470758    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.472212    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:03.468130    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.468858    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.470478    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.470758    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.472212    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
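Every cycle above begins with the apiserver liveness probe on the pgrep line; as a standalone check it looks like this (a sketch using the exact pattern from the log; exit status 1 means no matching process exists):

    # -x: whole command line must match; -n: newest match; -f: match full args.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver is not running"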
	I1202 22:32:05.975590  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:05.985554  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:05.985622  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:06.019132  546345 cri.go:89] found id: ""
	I1202 22:32:06.019157  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.019166  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:06.019173  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:06.019241  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:06.044254  546345 cri.go:89] found id: ""
	I1202 22:32:06.044277  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.044286  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:06.044293  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:06.044357  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:06.073518  546345 cri.go:89] found id: ""
	I1202 22:32:06.073541  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.073550  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:06.073556  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:06.073619  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:06.103333  546345 cri.go:89] found id: ""
	I1202 22:32:06.103400  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.103431  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:06.103450  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:06.103539  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:06.129000  546345 cri.go:89] found id: ""
	I1202 22:32:06.129036  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.129051  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:06.129058  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:06.129128  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:06.155243  546345 cri.go:89] found id: ""
	I1202 22:32:06.155266  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.155274  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:06.155281  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:06.155341  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:06.183834  546345 cri.go:89] found id: ""
	I1202 22:32:06.183900  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.183923  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:06.183942  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:06.184033  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:06.208508  546345 cri.go:89] found id: ""
	I1202 22:32:06.208546  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.208556  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:06.208566  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:06.208578  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:06.265928  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:06.265966  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:06.281782  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:06.281811  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:06.341568  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:06.333544    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.334347    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.335275    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.336735    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.337307    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:06.333544    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.334347    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.335275    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.336735    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.337307    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:06.341591  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:06.341603  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:06.366403  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:06.366435  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:08.899765  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:08.910234  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:08.910306  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:08.940951  546345 cri.go:89] found id: ""
	I1202 22:32:08.940979  546345 logs.go:282] 0 containers: []
	W1202 22:32:08.940989  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:08.940995  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:08.941054  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:08.966172  546345 cri.go:89] found id: ""
	I1202 22:32:08.966198  546345 logs.go:282] 0 containers: []
	W1202 22:32:08.966207  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:08.966214  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:08.966274  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:08.990534  546345 cri.go:89] found id: ""
	I1202 22:32:08.990561  546345 logs.go:282] 0 containers: []
	W1202 22:32:08.990569  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:08.990576  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:08.990633  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:09.016942  546345 cri.go:89] found id: ""
	I1202 22:32:09.016970  546345 logs.go:282] 0 containers: []
	W1202 22:32:09.016979  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:09.016986  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:09.017052  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:09.040852  546345 cri.go:89] found id: ""
	I1202 22:32:09.040893  546345 logs.go:282] 0 containers: []
	W1202 22:32:09.040902  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:09.040909  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:09.040978  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:09.064884  546345 cri.go:89] found id: ""
	I1202 22:32:09.064958  546345 logs.go:282] 0 containers: []
	W1202 22:32:09.064986  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:09.065005  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:09.065114  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:09.088807  546345 cri.go:89] found id: ""
	I1202 22:32:09.088878  546345 logs.go:282] 0 containers: []
	W1202 22:32:09.088903  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:09.088922  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:09.089011  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:09.115024  546345 cri.go:89] found id: ""
	I1202 22:32:09.115051  546345 logs.go:282] 0 containers: []
	W1202 22:32:09.115060  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:09.115069  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:09.115080  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:09.138651  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:09.138687  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:09.165425  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:09.165449  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:09.222720  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:09.222752  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:09.238413  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:09.238441  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:09.299159  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:09.292446    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.292889    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.294367    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.294689    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.296107    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:09.292446    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.292889    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.294367    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.294689    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.296107    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
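Each gathering pass runs the same fixed set of commands seen on the Run: lines above; they can be replayed manually on the node to inspect why the control plane never came up (commands copied verbatim from this log):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a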
	I1202 22:32:11.799390  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:11.813803  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:11.813890  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:11.853262  546345 cri.go:89] found id: ""
	I1202 22:32:11.853298  546345 logs.go:282] 0 containers: []
	W1202 22:32:11.853311  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:11.853318  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:11.853394  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:11.898451  546345 cri.go:89] found id: ""
	I1202 22:32:11.898474  546345 logs.go:282] 0 containers: []
	W1202 22:32:11.898482  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:11.898489  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:11.898549  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:11.926743  546345 cri.go:89] found id: ""
	I1202 22:32:11.926817  546345 logs.go:282] 0 containers: []
	W1202 22:32:11.926840  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:11.926860  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:11.926980  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:11.950985  546345 cri.go:89] found id: ""
	I1202 22:32:11.951011  546345 logs.go:282] 0 containers: []
	W1202 22:32:11.951019  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:11.951027  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:11.951106  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:11.975373  546345 cri.go:89] found id: ""
	I1202 22:32:11.975399  546345 logs.go:282] 0 containers: []
	W1202 22:32:11.975407  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:11.975414  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:11.975490  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:12.005482  546345 cri.go:89] found id: ""
	I1202 22:32:12.005511  546345 logs.go:282] 0 containers: []
	W1202 22:32:12.005521  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:12.005529  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:12.005643  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:12.032572  546345 cri.go:89] found id: ""
	I1202 22:32:12.032597  546345 logs.go:282] 0 containers: []
	W1202 22:32:12.032607  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:12.032634  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:12.032733  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:12.059401  546345 cri.go:89] found id: ""
	I1202 22:32:12.059476  546345 logs.go:282] 0 containers: []
	W1202 22:32:12.059492  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
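[editor's note] Each cycle above is minikube enumerating the expected control-plane workloads one name at a time: a pgrep for a live kube-apiserver process, then one crictl query per component, with an empty ID list treated as "not running". A condensed sketch of the same loop, using only the commands and flags shown in the Run: lines (assumes crictl is on PATH and containerd is the runtime, as in this job):

    # Query each expected component as the log does; in this run every
    # query returned an empty ID list, i.e. no control-plane containers exist.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "no container matching \"$c\""
    done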
	I1202 22:32:12.059504  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:12.059517  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:12.093142  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:12.093179  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:12.150021  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:12.150054  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:12.165956  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:12.165987  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:12.231857  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:12.225209    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.225713    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.227176    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.227478    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.228901    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:12.225209    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.225713    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.227176    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.227478    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.228901    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:12.231929  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:12.231956  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
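[editor's note] With no containers to inspect, the harness falls back to host-level evidence. The gathers it runs are ordinary commands, copied here verbatim from the Run: lines so they can be replayed by hand on the node; the describe-nodes step necessarily fails while :8443 is refusing connections:

    sudo journalctl -u kubelet -n 400        # kubelet unit log
    sudo journalctl -u containerd -n 400     # container runtime log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig   # exits 1 while apiserver is down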
	I1202 22:32:14.756725  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:14.767263  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:14.767333  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:14.801672  546345 cri.go:89] found id: ""
	I1202 22:32:14.801697  546345 logs.go:282] 0 containers: []
	W1202 22:32:14.801706  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:14.801713  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:14.801770  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:14.851488  546345 cri.go:89] found id: ""
	I1202 22:32:14.851517  546345 logs.go:282] 0 containers: []
	W1202 22:32:14.851532  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:14.851538  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:14.851605  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:14.888023  546345 cri.go:89] found id: ""
	I1202 22:32:14.888048  546345 logs.go:282] 0 containers: []
	W1202 22:32:14.888057  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:14.888064  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:14.888129  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:14.916001  546345 cri.go:89] found id: ""
	I1202 22:32:14.916053  546345 logs.go:282] 0 containers: []
	W1202 22:32:14.916061  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:14.916068  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:14.916135  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:14.942133  546345 cri.go:89] found id: ""
	I1202 22:32:14.942199  546345 logs.go:282] 0 containers: []
	W1202 22:32:14.942222  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:14.942240  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:14.942326  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:14.967663  546345 cri.go:89] found id: ""
	I1202 22:32:14.967694  546345 logs.go:282] 0 containers: []
	W1202 22:32:14.967702  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:14.967710  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:14.967779  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:14.997283  546345 cri.go:89] found id: ""
	I1202 22:32:14.997360  546345 logs.go:282] 0 containers: []
	W1202 22:32:14.997398  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:14.997424  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:14.997514  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:15.028362  546345 cri.go:89] found id: ""
	I1202 22:32:15.028443  546345 logs.go:282] 0 containers: []
	W1202 22:32:15.028481  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:15.028510  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:15.028577  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:15.084989  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:15.085026  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:15.101099  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:15.101135  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:15.163640  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:15.156494    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:15.157156    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:15.158849    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:15.159159    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:15.160627    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:15.156494    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:15.157156    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:15.158849    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:15.159159    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:15.160627    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:15.163661  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:15.163673  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:15.188815  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:15.188850  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:17.720502  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:17.730835  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:17.730906  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:17.754959  546345 cri.go:89] found id: ""
	I1202 22:32:17.754985  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.754994  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:17.755001  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:17.755058  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:17.779124  546345 cri.go:89] found id: ""
	I1202 22:32:17.779145  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.779153  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:17.779159  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:17.779216  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:17.861624  546345 cri.go:89] found id: ""
	I1202 22:32:17.861647  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.861670  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:17.861676  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:17.861733  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:17.891578  546345 cri.go:89] found id: ""
	I1202 22:32:17.891604  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.891612  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:17.891620  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:17.891677  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:17.914983  546345 cri.go:89] found id: ""
	I1202 22:32:17.915005  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.915013  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:17.915019  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:17.915075  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:17.938893  546345 cri.go:89] found id: ""
	I1202 22:32:17.938923  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.938932  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:17.938939  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:17.938997  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:17.964896  546345 cri.go:89] found id: ""
	I1202 22:32:17.964960  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.964983  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:17.964997  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:17.965076  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:17.988828  546345 cri.go:89] found id: ""
	I1202 22:32:17.988863  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.988872  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:17.988882  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:17.988893  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:18.022032  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:18.022059  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:18.077598  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:18.077635  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:18.095143  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:18.095184  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:18.157395  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:18.150066    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:18.150607    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:18.152261    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:18.152789    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:18.154364    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:18.150066    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:18.150607    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:18.152261    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:18.152789    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:18.154364    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:18.157426  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:18.157439  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:20.681946  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:20.692713  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:20.692790  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:20.716255  546345 cri.go:89] found id: ""
	I1202 22:32:20.716281  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.716290  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:20.716297  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:20.716355  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:20.743603  546345 cri.go:89] found id: ""
	I1202 22:32:20.743629  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.743638  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:20.743645  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:20.743705  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:20.768770  546345 cri.go:89] found id: ""
	I1202 22:32:20.768798  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.768807  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:20.768814  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:20.768878  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:20.805921  546345 cri.go:89] found id: ""
	I1202 22:32:20.805945  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.805954  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:20.805960  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:20.806018  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:20.882456  546345 cri.go:89] found id: ""
	I1202 22:32:20.882478  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.882486  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:20.882493  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:20.882548  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:20.906709  546345 cri.go:89] found id: ""
	I1202 22:32:20.906732  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.906740  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:20.906747  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:20.906803  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:20.930871  546345 cri.go:89] found id: ""
	I1202 22:32:20.930947  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.930970  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:20.930985  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:20.931072  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:20.954799  546345 cri.go:89] found id: ""
	I1202 22:32:20.954823  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.954832  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:20.954841  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:20.954853  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:20.982221  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:20.982253  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:21.038726  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:21.038763  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:21.054186  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:21.054213  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:21.118780  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:21.110603    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:21.111205    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:21.112835    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:21.113193    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:21.114862    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:21.110603    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:21.111205    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:21.112835    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:21.113193    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:21.114862    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:21.118846  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:21.118868  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:23.643583  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:23.655825  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:23.655896  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:23.680044  546345 cri.go:89] found id: ""
	I1202 22:32:23.680070  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.680079  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:23.680085  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:23.680143  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:23.708984  546345 cri.go:89] found id: ""
	I1202 22:32:23.709009  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.709017  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:23.709024  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:23.709082  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:23.734044  546345 cri.go:89] found id: ""
	I1202 22:32:23.734068  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.734076  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:23.734082  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:23.734142  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:23.763083  546345 cri.go:89] found id: ""
	I1202 22:32:23.763110  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.763118  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:23.763125  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:23.763183  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:23.809231  546345 cri.go:89] found id: ""
	I1202 22:32:23.809254  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.809262  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:23.809269  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:23.809328  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:23.877561  546345 cri.go:89] found id: ""
	I1202 22:32:23.877585  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.877593  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:23.877600  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:23.877685  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:23.900843  546345 cri.go:89] found id: ""
	I1202 22:32:23.900870  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.900879  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:23.900885  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:23.900948  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:23.926458  546345 cri.go:89] found id: ""
	I1202 22:32:23.926497  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.926506  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:23.926515  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:23.926526  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:23.951259  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:23.951296  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:23.979352  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:23.979421  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:24.036927  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:24.036965  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:24.052889  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:24.052925  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:24.114973  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:24.108057    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:24.108597    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:24.110063    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:24.110491    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:24.111943    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:24.108057    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:24.108597    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:24.110063    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:24.110491    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:24.111943    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:26.615216  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:26.625455  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:26.625533  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:26.651390  546345 cri.go:89] found id: ""
	I1202 22:32:26.651423  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.651432  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:26.651439  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:26.651508  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:26.677027  546345 cri.go:89] found id: ""
	I1202 22:32:26.677052  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.677060  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:26.677067  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:26.677127  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:26.706368  546345 cri.go:89] found id: ""
	I1202 22:32:26.706391  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.706400  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:26.706406  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:26.706469  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:26.730421  546345 cri.go:89] found id: ""
	I1202 22:32:26.730445  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.730453  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:26.730460  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:26.730525  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:26.754523  546345 cri.go:89] found id: ""
	I1202 22:32:26.754552  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.754561  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:26.754569  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:26.754633  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:26.779516  546345 cri.go:89] found id: ""
	I1202 22:32:26.779545  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.779554  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:26.779568  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:26.779632  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:26.823212  546345 cri.go:89] found id: ""
	I1202 22:32:26.823237  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.823246  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:26.823253  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:26.823313  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:26.858245  546345 cri.go:89] found id: ""
	I1202 22:32:26.858282  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.858291  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:26.858300  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:26.858313  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:26.917465  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:26.917500  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:26.933252  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:26.933281  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:26.995404  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:26.986840    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:26.987445    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:26.989131    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:26.989768    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:26.991326    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:26.986840    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:26.987445    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:26.989131    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:26.989768    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:26.991326    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:26.995426  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:26.995438  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:27.021457  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:27.021490  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:29.552148  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:29.562514  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:29.562594  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:29.587012  546345 cri.go:89] found id: ""
	I1202 22:32:29.587037  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.587046  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:29.587079  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:29.587163  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:29.613219  546345 cri.go:89] found id: ""
	I1202 22:32:29.613246  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.613254  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:29.613261  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:29.613321  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:29.638585  546345 cri.go:89] found id: ""
	I1202 22:32:29.638611  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.638619  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:29.638626  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:29.638682  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:29.663132  546345 cri.go:89] found id: ""
	I1202 22:32:29.663208  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.663225  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:29.663232  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:29.663304  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:29.686925  546345 cri.go:89] found id: ""
	I1202 22:32:29.686947  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.686955  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:29.686961  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:29.687021  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:29.711947  546345 cri.go:89] found id: ""
	I1202 22:32:29.711971  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.711979  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:29.711986  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:29.712047  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:29.735873  546345 cri.go:89] found id: ""
	I1202 22:32:29.735940  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.735962  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:29.735988  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:29.736071  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:29.764629  546345 cri.go:89] found id: ""
	I1202 22:32:29.764655  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.764664  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:29.764674  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:29.764685  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:29.789251  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:29.789289  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:29.859060  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:29.859085  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:29.927618  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:29.927653  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:29.944397  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:29.944477  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:30.015300  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:30.004451    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:30.006385    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:30.007099    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:30.009385    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:30.010389    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:30.004451    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:30.006385    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:30.007099    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:30.009385    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:30.010389    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
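[editor's note] Note the cadence: the pgrep probes land at 22:32:11, :14, :17, :20, :23, :26, :29, :32, :35, so the wait loop re-checks roughly every three seconds and re-gathers logs on every miss until its overall deadline expires, which is what ultimately fails this start. A schematic of that poll loop; the three-second interval is read off the timestamps above and the deadline value is illustrative, neither is taken from minikube source:

    # Poll for a live apiserver process, as the ssh_runner lines do,
    # giving up after an overall deadline (illustrative value).
    deadline=$((SECONDS + 480))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ $SECONDS -ge $deadline ] && { echo "apiserver never came up"; exit 1; }
      sleep 3
    done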
	I1202 22:32:32.515559  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:32.525887  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:32.525957  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:32.549813  546345 cri.go:89] found id: ""
	I1202 22:32:32.549848  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.549857  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:32.549865  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:32.549931  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:32.575230  546345 cri.go:89] found id: ""
	I1202 22:32:32.575253  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.575261  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:32.575268  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:32.575359  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:32.600349  546345 cri.go:89] found id: ""
	I1202 22:32:32.600374  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.600382  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:32.600389  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:32.600448  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:32.629053  546345 cri.go:89] found id: ""
	I1202 22:32:32.629078  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.629086  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:32.629095  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:32.629152  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:32.653727  546345 cri.go:89] found id: ""
	I1202 22:32:32.653750  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.653759  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:32.653766  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:32.653824  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:32.677981  546345 cri.go:89] found id: ""
	I1202 22:32:32.678019  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.678028  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:32.678035  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:32.678101  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:32.702199  546345 cri.go:89] found id: ""
	I1202 22:32:32.702222  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.702230  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:32.702237  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:32.702294  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:32.725924  546345 cri.go:89] found id: ""
	I1202 22:32:32.725957  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.725967  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:32.725976  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:32.726002  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:32.779589  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:32.779623  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:32.807508  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:32.807541  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:32.902366  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:32.894161    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.894903    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.896591    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.896873    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.898388    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:32.894161    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.894903    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.896591    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.896873    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.898388    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:32.902386  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:32.902399  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:32.925648  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:32.925948  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
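
The cycle above is minikube's diagnostic loop: with no kube-apiserver process found, it queries the containerd CRI for each expected control-plane container by name, and every query comes back empty. A minimal sketch of that listing step, assuming crictl is installed on the node and runnable via sudo (the container names are taken from the --name filters in the log; this is an illustration, not minikube's actual code):

    // list_control_plane.go - sketch of the "crictl ps -a --quiet --name=..."
    // queries repeated in the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	names := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range names {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("crictl failed for %q: %v\n", name, err)
    			continue
    		}
    		// --quiet prints one container ID per line; empty output matches
    		// the log's `found id: ""` / "0 containers: []" lines.
    		ids := strings.Fields(string(out))
    		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
    	}
    }

An empty ID list for every name, as here, means the control plane was never created, which is why each cycle then falls back to gathering kubelet, dmesg, and containerd output.
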
	I1202 22:32:35.456822  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:35.467636  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:35.467796  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:35.496302  546345 cri.go:89] found id: ""
	I1202 22:32:35.496328  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.496337  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:35.496343  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:35.496407  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:35.525080  546345 cri.go:89] found id: ""
	I1202 22:32:35.525107  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.525116  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:35.525122  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:35.525187  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:35.549407  546345 cri.go:89] found id: ""
	I1202 22:32:35.549432  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.549441  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:35.549447  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:35.549505  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:35.574018  546345 cri.go:89] found id: ""
	I1202 22:32:35.574040  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.574049  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:35.574056  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:35.574115  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:35.604104  546345 cri.go:89] found id: ""
	I1202 22:32:35.604128  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.604137  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:35.604143  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:35.604201  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:35.629312  546345 cri.go:89] found id: ""
	I1202 22:32:35.629346  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.629355  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:35.629361  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:35.629427  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:35.653959  546345 cri.go:89] found id: ""
	I1202 22:32:35.653987  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.653996  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:35.654003  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:35.654064  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:35.678225  546345 cri.go:89] found id: ""
	I1202 22:32:35.678301  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.678325  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:35.678343  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:35.678368  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:35.733851  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:35.733884  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:35.749526  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:35.749554  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:35.844900  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:35.824762    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.828451    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.838437    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.839235    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.840948    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:35.824762    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.828451    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.838437    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.839235    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.840948    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:35.844925  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:35.844940  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:35.882135  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:35.882168  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
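
The cycles are spaced roughly three seconds apart (22:32:32, 22:32:35, 22:32:38, ...), which suggests a fixed-interval retry on the pgrep check for a kube-apiserver process. A rough sketch of such a wait loop, assuming the same pgrep pattern seen in the Run: lines and a hypothetical overall deadline (the log does not state the timeout):

    // wait_apiserver.go - sketch of a fixed-interval readiness poll; the 3s
    // interval is inferred from the log timestamps, the 4m deadline is assumed.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(4 * time.Minute) // assumed, not from the log
    	for time.Now().Before(deadline) {
    		// -x exact match, -n newest process, -f match full command line.
    		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    		if err == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(3 * time.Second)
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }

pgrep exits nonzero when nothing matches, so Run() returning an error is the "not yet running" case that keeps the loop going.
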
	I1202 22:32:38.412949  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:38.423327  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:38.423399  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:38.447069  546345 cri.go:89] found id: ""
	I1202 22:32:38.447097  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.447107  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:38.447148  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:38.447205  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:38.473526  546345 cri.go:89] found id: ""
	I1202 22:32:38.473549  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.473558  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:38.473565  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:38.473626  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:38.501943  546345 cri.go:89] found id: ""
	I1202 22:32:38.501974  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.501984  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:38.501990  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:38.502049  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:38.526634  546345 cri.go:89] found id: ""
	I1202 22:32:38.526657  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.526666  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:38.526672  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:38.526730  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:38.555523  546345 cri.go:89] found id: ""
	I1202 22:32:38.555549  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.555558  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:38.555564  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:38.555622  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:38.579778  546345 cri.go:89] found id: ""
	I1202 22:32:38.579804  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.579812  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:38.579819  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:38.579875  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:38.605528  546345 cri.go:89] found id: ""
	I1202 22:32:38.605589  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.605613  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:38.605633  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:38.605733  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:38.629391  546345 cri.go:89] found id: ""
	I1202 22:32:38.629412  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.629421  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:38.629429  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:38.629441  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:38.684729  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:38.684763  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:38.699841  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:38.699916  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:38.767359  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:38.760357    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.761047    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.762602    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.762883    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.764331    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:38.760357    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.761047    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.762602    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.762883    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.764331    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:38.767378  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:38.767391  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:38.792073  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:38.792104  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:41.385000  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:41.395673  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:41.395741  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:41.420534  546345 cri.go:89] found id: ""
	I1202 22:32:41.420574  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.420586  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:41.420593  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:41.420652  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:41.445534  546345 cri.go:89] found id: ""
	I1202 22:32:41.445559  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.445567  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:41.445573  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:41.445635  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:41.470438  546345 cri.go:89] found id: ""
	I1202 22:32:41.470463  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.470473  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:41.470481  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:41.470551  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:41.495013  546345 cri.go:89] found id: ""
	I1202 22:32:41.495037  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.495045  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:41.495052  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:41.495139  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:41.520340  546345 cri.go:89] found id: ""
	I1202 22:32:41.520375  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.520385  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:41.520392  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:41.520488  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:41.545599  546345 cri.go:89] found id: ""
	I1202 22:32:41.545633  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.545642  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:41.545649  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:41.545753  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:41.570203  546345 cri.go:89] found id: ""
	I1202 22:32:41.570227  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.570235  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:41.570241  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:41.570317  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:41.595416  546345 cri.go:89] found id: ""
	I1202 22:32:41.595442  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.595451  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:41.595461  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:41.595493  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:41.622428  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:41.622456  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:41.678602  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:41.678634  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:41.694624  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:41.694654  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:41.757051  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:41.749001    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.749418    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.751146    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.751874    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.753439    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:41.749001    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.749418    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.751146    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.751874    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.753439    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:41.757072  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:41.757085  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
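
The repeated "describe nodes" failure is a downstream symptom of the same condition: kubectl dials localhost:8443, where the apiserver should be listening, and gets connection refused because no apiserver container exists. A self-contained probe that reproduces the check (a hypothetical diagnostic, not part of minikube):

    // probe_apiserver.go - dials the apiserver port; with no listener this
    // fails the same way as the log's "dial tcp [::1]:8443: connect: connection refused".
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }
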
	I1202 22:32:44.281854  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:44.292430  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:44.292510  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:44.317241  546345 cri.go:89] found id: ""
	I1202 22:32:44.317271  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.317279  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:44.317286  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:44.317350  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:44.341824  546345 cri.go:89] found id: ""
	I1202 22:32:44.341849  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.341857  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:44.341865  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:44.341926  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:44.366036  546345 cri.go:89] found id: ""
	I1202 22:32:44.366061  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.366070  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:44.366077  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:44.366139  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:44.391175  546345 cri.go:89] found id: ""
	I1202 22:32:44.391200  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.391209  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:44.391216  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:44.391292  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:44.420090  546345 cri.go:89] found id: ""
	I1202 22:32:44.420123  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.420132  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:44.420155  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:44.420234  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:44.444490  546345 cri.go:89] found id: ""
	I1202 22:32:44.444540  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.444549  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:44.444557  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:44.444612  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:44.470392  546345 cri.go:89] found id: ""
	I1202 22:32:44.470419  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.470427  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:44.470434  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:44.470493  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:44.495601  546345 cri.go:89] found id: ""
	I1202 22:32:44.495624  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.495633  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:44.495664  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:44.495690  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:44.549795  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:44.549886  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:44.567082  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:44.567110  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:44.632540  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:44.624658    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.625347    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.626939    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.627534    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.629113    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:44.624658    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.625347    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.626939    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.627534    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.629113    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:44.632570  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:44.632582  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:44.657144  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:44.657180  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:47.185793  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:47.196271  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:47.196339  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:47.226550  546345 cri.go:89] found id: ""
	I1202 22:32:47.226572  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.226581  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:47.226588  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:47.226645  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:47.250706  546345 cri.go:89] found id: ""
	I1202 22:32:47.250732  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.250741  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:47.250748  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:47.250811  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:47.280047  546345 cri.go:89] found id: ""
	I1202 22:32:47.280072  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.280081  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:47.280088  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:47.280154  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:47.306607  546345 cri.go:89] found id: ""
	I1202 22:32:47.306633  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.306642  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:47.306651  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:47.306718  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:47.330953  546345 cri.go:89] found id: ""
	I1202 22:32:47.331024  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.331038  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:47.331045  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:47.331105  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:47.360182  546345 cri.go:89] found id: ""
	I1202 22:32:47.360206  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.360215  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:47.360222  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:47.360293  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:47.388010  546345 cri.go:89] found id: ""
	I1202 22:32:47.388032  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.388041  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:47.388048  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:47.388114  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:47.415262  546345 cri.go:89] found id: ""
	I1202 22:32:47.415294  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.415303  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:47.415312  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:47.415326  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:47.433260  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:47.433288  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:47.497337  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:47.489370    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.490186    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.491743    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.492249    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.493701    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:47.489370    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.490186    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.491743    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.492249    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.493701    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:47.497366  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:47.497378  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:47.521722  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:47.521801  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:47.548995  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:47.549027  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:50.107291  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:50.119155  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:50.119230  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:50.144228  546345 cri.go:89] found id: ""
	I1202 22:32:50.144252  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.144261  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:50.144268  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:50.144329  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:50.172928  546345 cri.go:89] found id: ""
	I1202 22:32:50.172951  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.172959  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:50.172966  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:50.173027  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:50.201752  546345 cri.go:89] found id: ""
	I1202 22:32:50.201795  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.201804  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:50.201811  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:50.201873  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:50.225118  546345 cri.go:89] found id: ""
	I1202 22:32:50.225139  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.225148  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:50.225154  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:50.225217  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:50.251396  546345 cri.go:89] found id: ""
	I1202 22:32:50.251421  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.251430  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:50.251437  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:50.251495  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:50.278859  546345 cri.go:89] found id: ""
	I1202 22:32:50.278887  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.278896  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:50.278903  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:50.278961  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:50.302858  546345 cri.go:89] found id: ""
	I1202 22:32:50.302891  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.302900  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:50.302907  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:50.302972  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:50.330618  546345 cri.go:89] found id: ""
	I1202 22:32:50.330642  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.330650  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:50.330659  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:50.330670  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:50.347121  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:50.347147  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:50.414460  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:50.406836   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.407605   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.409232   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.409526   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.410953   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:50.406836   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.407605   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.409232   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.409526   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.410953   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:50.414482  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:50.414496  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:50.438651  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:50.438682  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:50.466506  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:50.466532  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
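
Since no Kubernetes component is queryable, each cycle ends by collecting host-level diagnostics instead. The four Run: commands are identical across cycles; a condensed sketch that executes them in one pass (commands copied verbatim from the log; assumes bash, journalctl, and crictl are present on the node):

    // gather_logs.go - runs the fallback log-gathering commands from the log.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmds := map[string]string{
    		"kubelet":          `sudo journalctl -u kubelet -n 400`,
    		"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
    		"containerd":       `sudo journalctl -u containerd -n 400`,
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for name, cmd := range cmds {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Printf("=== %s (err=%v) ===\n%s\n", name, err, out)
    	}
    }

Note the guard in the container-status command: it resolves crictl via `which` and, if that whole invocation fails, falls back to `sudo docker ps -a`.
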
	I1202 22:32:53.022126  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:53.032606  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:53.032678  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:53.068046  546345 cri.go:89] found id: ""
	I1202 22:32:53.068078  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.068088  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:53.068095  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:53.068154  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:53.130393  546345 cri.go:89] found id: ""
	I1202 22:32:53.130414  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.130423  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:53.130429  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:53.130488  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:53.156458  546345 cri.go:89] found id: ""
	I1202 22:32:53.156481  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.156498  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:53.156504  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:53.156564  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:53.180994  546345 cri.go:89] found id: ""
	I1202 22:32:53.181067  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.181090  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:53.181110  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:53.181196  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:53.204951  546345 cri.go:89] found id: ""
	I1202 22:32:53.204976  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.204985  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:53.204993  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:53.205053  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:53.232863  546345 cri.go:89] found id: ""
	I1202 22:32:53.232896  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.232905  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:53.232912  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:53.232981  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:53.263355  546345 cri.go:89] found id: ""
	I1202 22:32:53.263381  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.263390  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:53.263396  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:53.263454  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:53.288048  546345 cri.go:89] found id: ""
	I1202 22:32:53.288074  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.288082  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:53.288092  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:53.288103  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:53.343380  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:53.343416  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:53.359279  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:53.359304  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:53.426667  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:53.418963   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.419594   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.421185   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.421729   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.423366   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:53.418963   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.419594   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.421185   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.421729   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.423366   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:53.426690  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:53.426703  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:53.451602  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:53.451640  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:55.979195  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:55.989644  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:55.989738  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:56.016824  546345 cri.go:89] found id: ""
	I1202 22:32:56.016857  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.016866  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:56.016873  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:56.016939  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:56.061785  546345 cri.go:89] found id: ""
	I1202 22:32:56.061833  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.061846  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:56.061854  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:56.061938  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:56.099312  546345 cri.go:89] found id: ""
	I1202 22:32:56.099341  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.099351  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:56.099359  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:56.099422  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:56.133179  546345 cri.go:89] found id: ""
	I1202 22:32:56.133209  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.133217  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:56.133224  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:56.133285  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:56.166397  546345 cri.go:89] found id: ""
	I1202 22:32:56.166420  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.166429  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:56.166435  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:56.166493  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:56.191240  546345 cri.go:89] found id: ""
	I1202 22:32:56.191300  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.191323  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:56.191343  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:56.191406  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:56.219940  546345 cri.go:89] found id: ""
	I1202 22:32:56.219966  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.219975  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:56.219982  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:56.220042  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:56.245089  546345 cri.go:89] found id: ""
	I1202 22:32:56.245116  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.245125  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:56.245134  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:56.245145  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:56.275969  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:56.275995  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:56.330353  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:56.330388  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:56.346262  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:56.346293  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:56.411285  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:56.403890   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.404747   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.406252   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.406670   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.408188   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:56.403890   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.404747   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.406252   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.406670   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.408188   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:56.411307  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:56.411320  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:58.937516  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:58.947690  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:58.947760  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:58.971175  546345 cri.go:89] found id: ""
	I1202 22:32:58.971209  546345 logs.go:282] 0 containers: []
	W1202 22:32:58.971221  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:58.971229  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:58.971289  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:58.995437  546345 cri.go:89] found id: ""
	I1202 22:32:58.995465  546345 logs.go:282] 0 containers: []
	W1202 22:32:58.995474  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:58.995481  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:58.995538  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:59.021289  546345 cri.go:89] found id: ""
	I1202 22:32:59.021315  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.021323  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:59.021329  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:59.021388  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:59.067648  546345 cri.go:89] found id: ""
	I1202 22:32:59.067676  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.067684  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:59.067691  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:59.067752  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:59.120318  546345 cri.go:89] found id: ""
	I1202 22:32:59.120353  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.120362  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:59.120369  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:59.120435  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:59.147811  546345 cri.go:89] found id: ""
	I1202 22:32:59.147845  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.147855  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:59.147862  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:59.147929  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:59.176414  546345 cri.go:89] found id: ""
	I1202 22:32:59.176448  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.176456  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:59.176463  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:59.176534  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:59.202001  546345 cri.go:89] found id: ""
	I1202 22:32:59.202027  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.202035  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:59.202045  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:59.202056  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:59.257545  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:59.257581  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:59.273305  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:59.273385  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:59.335480  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:59.328097   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.328843   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.330336   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.330876   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.332477   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:59.328097   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.328843   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.330336   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.330876   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.332477   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
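Each pass above probes the runtime once per control-plane component with `sudo crictl ps -a --quiet --name=<component>` and treats empty output as "no container was found". A minimal Go sketch of that probe loop, assuming sudo and crictl are available on the node; the helper name is illustrative, not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs shells out to crictl; with --quiet it prints one
    // container ID per line, so empty output means "not found".
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, c := range components {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Printf("probe %q failed: %v\n", c, err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", c)
            } else {
                fmt.Printf("%q: %d container(s)\n", c, len(ids))
            }
        }
    }

In the failing runs above, every probe returns the empty string, consistent with the control plane never having started.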
	I1202 22:32:59.335501  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:59.335514  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:59.359981  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:59.360017  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:01.886549  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:01.897148  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:01.897222  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:01.923194  546345 cri.go:89] found id: ""
	I1202 22:33:01.923220  546345 logs.go:282] 0 containers: []
	W1202 22:33:01.923229  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:01.923236  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:01.923295  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:01.947898  546345 cri.go:89] found id: ""
	I1202 22:33:01.947922  546345 logs.go:282] 0 containers: []
	W1202 22:33:01.947930  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:01.947937  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:01.947996  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:01.977128  546345 cri.go:89] found id: ""
	I1202 22:33:01.977153  546345 logs.go:282] 0 containers: []
	W1202 22:33:01.977161  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:01.977167  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:01.977226  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:02.004541  546345 cri.go:89] found id: ""
	I1202 22:33:02.004569  546345 logs.go:282] 0 containers: []
	W1202 22:33:02.004578  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:02.004586  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:02.004660  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:02.034163  546345 cri.go:89] found id: ""
	I1202 22:33:02.034189  546345 logs.go:282] 0 containers: []
	W1202 22:33:02.034199  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:02.034206  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:02.034302  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:02.103583  546345 cri.go:89] found id: ""
	I1202 22:33:02.103619  546345 logs.go:282] 0 containers: []
	W1202 22:33:02.103628  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:02.103651  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:02.103732  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:02.141546  546345 cri.go:89] found id: ""
	I1202 22:33:02.141581  546345 logs.go:282] 0 containers: []
	W1202 22:33:02.141590  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:02.141597  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:02.141672  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:02.166780  546345 cri.go:89] found id: ""
	I1202 22:33:02.166805  546345 logs.go:282] 0 containers: []
	W1202 22:33:02.166815  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:02.166824  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:02.166835  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:02.191150  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:02.191186  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:02.222079  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:02.222108  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:02.279420  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:02.279453  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:02.295466  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:02.295494  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:02.371035  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:02.361373   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.362484   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.364780   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.365388   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.366360   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:02.361373   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.362484   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.364780   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.365388   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.366360   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
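The recurring "container status" step uses a shell fallback: `which crictl || echo crictl` substitutes the bare name when crictl is missing from PATH, so the first branch still fails cleanly and `|| sudo docker ps -a` can take over. A rough Go equivalent of that try-crictl-then-docker fallback, illustrative only and assuming sudo access:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus tries the CRI tool first and falls back to docker
    // if crictl is absent or errors, mirroring the shell one-liner above.
    func containerStatus() ([]byte, error) {
        if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
            return out, nil
        }
        return exec.Command("sudo", "docker", "ps", "-a").Output()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("neither crictl nor docker produced a listing:", err)
            return
        }
        fmt.Print(string(out))
    }
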
	I1202 22:33:04.872723  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:04.882988  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:04.883064  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:04.906907  546345 cri.go:89] found id: ""
	I1202 22:33:04.906931  546345 logs.go:282] 0 containers: []
	W1202 22:33:04.906940  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:04.906947  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:04.907006  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:04.931077  546345 cri.go:89] found id: ""
	I1202 22:33:04.931102  546345 logs.go:282] 0 containers: []
	W1202 22:33:04.931111  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:04.931119  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:04.931176  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:04.954232  546345 cri.go:89] found id: ""
	I1202 22:33:04.954258  546345 logs.go:282] 0 containers: []
	W1202 22:33:04.954266  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:04.954273  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:04.954332  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:04.978316  546345 cri.go:89] found id: ""
	I1202 22:33:04.978339  546345 logs.go:282] 0 containers: []
	W1202 22:33:04.978347  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:04.978354  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:04.978412  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:05.008227  546345 cri.go:89] found id: ""
	I1202 22:33:05.008253  546345 logs.go:282] 0 containers: []
	W1202 22:33:05.008261  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:05.008269  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:05.008401  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:05.037911  546345 cri.go:89] found id: ""
	I1202 22:33:05.037948  546345 logs.go:282] 0 containers: []
	W1202 22:33:05.037957  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:05.037964  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:05.038041  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:05.115835  546345 cri.go:89] found id: ""
	I1202 22:33:05.115860  546345 logs.go:282] 0 containers: []
	W1202 22:33:05.115869  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:05.115876  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:05.115944  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:05.142576  546345 cri.go:89] found id: ""
	I1202 22:33:05.142599  546345 logs.go:282] 0 containers: []
	W1202 22:33:05.142608  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:05.142617  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:05.142628  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:05.172774  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:05.172802  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:05.229451  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:05.229486  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:05.245158  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:05.245184  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:05.308964  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:05.301260   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.302075   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.303718   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.304189   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.305899   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:05.301260   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.302075   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.303718   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.304189   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.305899   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:05.308985  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:05.309000  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:07.834473  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:07.845693  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:07.845780  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:07.870140  546345 cri.go:89] found id: ""
	I1202 22:33:07.870162  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.870171  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:07.870178  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:07.870238  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:07.894539  546345 cri.go:89] found id: ""
	I1202 22:33:07.894562  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.894570  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:07.894583  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:07.894640  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:07.918644  546345 cri.go:89] found id: ""
	I1202 22:33:07.918672  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.918681  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:07.918688  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:07.918751  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:07.942273  546345 cri.go:89] found id: ""
	I1202 22:33:07.942296  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.942304  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:07.942310  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:07.942367  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:07.965678  546345 cri.go:89] found id: ""
	I1202 22:33:07.965703  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.965712  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:07.965718  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:07.965775  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:07.989455  546345 cri.go:89] found id: ""
	I1202 22:33:07.989480  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.989489  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:07.989496  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:07.989556  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:08.015583  546345 cri.go:89] found id: ""
	I1202 22:33:08.015608  546345 logs.go:282] 0 containers: []
	W1202 22:33:08.015617  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:08.015624  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:08.015686  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:08.068697  546345 cri.go:89] found id: ""
	I1202 22:33:08.068724  546345 logs.go:282] 0 containers: []
	W1202 22:33:08.068734  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:08.068745  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:08.068768  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:08.112700  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:08.112750  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:08.148124  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:08.148159  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:08.208343  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:08.208384  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:08.224299  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:08.224331  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:08.287847  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:08.279728   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.280541   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.282177   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.282779   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.284345   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:08.279728   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.280541   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.282177   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.282779   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.284345   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:10.788102  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:10.798373  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:10.798493  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:10.826690  546345 cri.go:89] found id: ""
	I1202 22:33:10.826715  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.826724  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:10.826731  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:10.826791  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:10.857739  546345 cri.go:89] found id: ""
	I1202 22:33:10.857765  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.857773  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:10.857780  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:10.857841  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:10.886900  546345 cri.go:89] found id: ""
	I1202 22:33:10.886926  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.886935  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:10.886942  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:10.887001  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:10.915788  546345 cri.go:89] found id: ""
	I1202 22:33:10.915811  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.915820  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:10.915826  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:10.915883  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:10.940846  546345 cri.go:89] found id: ""
	I1202 22:33:10.940869  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.940877  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:10.940883  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:10.940942  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:10.969358  546345 cri.go:89] found id: ""
	I1202 22:33:10.969380  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.969389  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:10.969396  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:10.969452  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:10.994365  546345 cri.go:89] found id: ""
	I1202 22:33:10.994389  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.994398  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:10.994405  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:10.994488  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:11.021354  546345 cri.go:89] found id: ""
	I1202 22:33:11.021376  546345 logs.go:282] 0 containers: []
	W1202 22:33:11.021387  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:11.021396  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:11.021406  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:11.096880  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:11.096922  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:11.115249  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:11.115286  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:11.192270  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:11.184836   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.185321   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.186779   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.187091   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.188512   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:11.184836   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.185321   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.186779   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.187091   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.188512   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:11.192290  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:11.192305  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:11.216801  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:11.216838  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:13.747802  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:13.758663  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:13.758739  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:13.784138  546345 cri.go:89] found id: ""
	I1202 22:33:13.784160  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.784169  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:13.784175  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:13.784242  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:13.810746  546345 cri.go:89] found id: ""
	I1202 22:33:13.810768  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.810777  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:13.810783  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:13.810841  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:13.834531  546345 cri.go:89] found id: ""
	I1202 22:33:13.834563  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.834571  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:13.834578  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:13.834644  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:13.858698  546345 cri.go:89] found id: ""
	I1202 22:33:13.858721  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.858729  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:13.858736  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:13.858798  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:13.882726  546345 cri.go:89] found id: ""
	I1202 22:33:13.882749  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.882757  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:13.882764  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:13.882822  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:13.908263  546345 cri.go:89] found id: ""
	I1202 22:33:13.908287  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.908296  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:13.908302  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:13.908359  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:13.933266  546345 cri.go:89] found id: ""
	I1202 22:33:13.933290  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.933298  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:13.933304  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:13.933361  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:13.957668  546345 cri.go:89] found id: ""
	I1202 22:33:13.957738  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.957753  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:13.957764  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:13.957776  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:13.983158  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:13.983193  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:14.013404  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:14.013434  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:14.076941  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:14.076982  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:14.122673  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:14.122701  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:14.186208  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:14.178781   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.179568   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.180719   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.181367   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.183063   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:14.178781   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.179568   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.180719   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.181367   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.183063   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
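The `sudo pgrep -xnf kube-apiserver.*minikube.*` probes above recur on a roughly three-second cadence, which suggests a poll-until-deadline loop waiting for the apiserver process to appear. A minimal sketch of such a loop, using the same pgrep pattern from the log; the two-minute deadline is a hypothetical choice, not minikube's configured timeout:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning returns true when pgrep exits zero, i.e. some
    // process matches the full-command-line pattern (-f), exactly (-x),
    // taking the newest match (-n).
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }

In this run the loop never observes the process, so each iteration falls through to the log-gathering pass that follows.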
	I1202 22:33:16.686471  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:16.697167  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:16.697255  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:16.721334  546345 cri.go:89] found id: ""
	I1202 22:33:16.721358  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.721367  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:16.721374  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:16.721439  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:16.744849  546345 cri.go:89] found id: ""
	I1202 22:33:16.744875  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.744887  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:16.744893  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:16.744950  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:16.768289  546345 cri.go:89] found id: ""
	I1202 22:33:16.768315  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.768324  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:16.768330  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:16.768390  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:16.793721  546345 cri.go:89] found id: ""
	I1202 22:33:16.793745  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.793754  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:16.793761  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:16.793822  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:16.819397  546345 cri.go:89] found id: ""
	I1202 22:33:16.819419  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.819427  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:16.819434  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:16.819493  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:16.847655  546345 cri.go:89] found id: ""
	I1202 22:33:16.847682  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.847691  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:16.847699  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:16.847779  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:16.872502  546345 cri.go:89] found id: ""
	I1202 22:33:16.872527  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.872535  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:16.872542  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:16.872605  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:16.904922  546345 cri.go:89] found id: ""
	I1202 22:33:16.904953  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.904968  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:16.904978  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:16.904990  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:16.929494  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:16.929529  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:16.960812  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:16.960840  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:17.015332  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:17.015369  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:17.031163  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:17.031192  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:17.146404  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:17.138582   11076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:17.139239   11076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:17.140866   11076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:17.141549   11076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:17.143289   11076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:19.646668  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:19.656904  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:19.656972  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:19.681366  546345 cri.go:89] found id: ""
	I1202 22:33:19.681390  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.681397  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:19.681404  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:19.681462  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:19.705682  546345 cri.go:89] found id: ""
	I1202 22:33:19.705711  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.705720  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:19.705726  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:19.705782  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:19.728889  546345 cri.go:89] found id: ""
	I1202 22:33:19.728913  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.728921  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:19.728928  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:19.728986  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:19.753177  546345 cri.go:89] found id: ""
	I1202 22:33:19.753200  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.753209  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:19.753215  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:19.753275  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:19.777064  546345 cri.go:89] found id: ""
	I1202 22:33:19.777087  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.777095  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:19.777101  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:19.777165  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:19.804440  546345 cri.go:89] found id: ""
	I1202 22:33:19.804462  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.804479  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:19.804487  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:19.804544  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:19.831370  546345 cri.go:89] found id: ""
	I1202 22:33:19.831395  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.831403  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:19.831409  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:19.831470  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:19.854457  546345 cri.go:89] found id: ""
	I1202 22:33:19.854481  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.854489  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:19.854498  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:19.854512  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:19.912020  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:19.912055  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:19.927521  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:19.927549  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:19.988124  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:19.980291   11176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:19.980690   11176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:19.981977   11176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:19.982920   11176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:19.984635   11176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:19.988188  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:19.988211  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:20.013304  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:20.013341  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
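The roughly three-second cadence of the `sudo pgrep -xnf kube-apiserver.*minikube.*` runs suggests a poll-until-deadline loop around the process check. A hedged sketch of such a loop; the interval and deadline values are assumptions, not taken from the source:

// Poll for a running kube-apiserver process until it appears or a
// deadline passes. pgrep exits non-zero when nothing matches, so
// Run() returning nil means the process was found.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}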
	I1202 22:33:22.562705  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:22.573519  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:22.573597  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:22.603465  546345 cri.go:89] found id: ""
	I1202 22:33:22.603541  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.603556  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:22.603564  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:22.603670  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:22.629949  546345 cri.go:89] found id: ""
	I1202 22:33:22.629976  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.629985  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:22.629991  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:22.630051  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:22.660760  546345 cri.go:89] found id: ""
	I1202 22:33:22.660785  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.660794  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:22.660801  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:22.660861  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:22.685501  546345 cri.go:89] found id: ""
	I1202 22:33:22.685531  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.685540  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:22.685555  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:22.685618  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:22.712679  546345 cri.go:89] found id: ""
	I1202 22:33:22.712714  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.712723  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:22.712730  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:22.712799  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:22.738275  546345 cri.go:89] found id: ""
	I1202 22:33:22.738301  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.738310  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:22.738317  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:22.738437  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:22.767652  546345 cri.go:89] found id: ""
	I1202 22:33:22.767677  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.767686  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:22.767694  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:22.767756  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:22.793810  546345 cri.go:89] found id: ""
	I1202 22:33:22.793836  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.793845  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:22.793854  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:22.793866  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:22.856577  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:22.856615  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:22.872185  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:22.872221  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:22.937005  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:22.929061   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:22.930043   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:22.931595   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:22.932111   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:22.933624   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:22.937039  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:22.937052  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:22.961706  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:22.961743  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
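The "container status" step uses a shell fallback: run crictl if `which crictl` finds it, and otherwise fall back to `docker ps -a`. A sketch of the same preference order in Go; the helper name containerStatus is illustrative:

// Prefer crictl when it is on PATH, otherwise fall back to docker,
// mirroring `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		return exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Print(string(out))
}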
	I1202 22:33:25.491815  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:25.502275  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:25.502392  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:25.526647  546345 cri.go:89] found id: ""
	I1202 22:33:25.526680  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.526688  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:25.526695  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:25.526767  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:25.554949  546345 cri.go:89] found id: ""
	I1202 22:33:25.554970  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.554980  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:25.554986  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:25.555043  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:25.578929  546345 cri.go:89] found id: ""
	I1202 22:33:25.578953  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.578962  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:25.578968  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:25.579044  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:25.608022  546345 cri.go:89] found id: ""
	I1202 22:33:25.608056  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.608065  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:25.608088  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:25.608169  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:25.636085  546345 cri.go:89] found id: ""
	I1202 22:33:25.636120  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.636130  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:25.636153  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:25.636235  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:25.666823  546345 cri.go:89] found id: ""
	I1202 22:33:25.666856  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.666865  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:25.666873  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:25.666942  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:25.690601  546345 cri.go:89] found id: ""
	I1202 22:33:25.690635  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.690645  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:25.690652  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:25.690723  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:25.719343  546345 cri.go:89] found id: ""
	I1202 22:33:25.719379  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.719388  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:25.719396  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:25.719408  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:25.743724  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:25.743768  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:25.771761  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:25.771786  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:25.828678  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:25.828713  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:25.844300  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:25.844332  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:25.908308  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:25.900092   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:25.900613   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:25.902283   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:25.902859   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:25.904505   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
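The kubelet and containerd logs in each cycle are tailed with `journalctl -u <unit> -n 400`, i.e. the last 400 journal lines per systemd unit. A minimal sketch of that call; the unit names mirror the log, everything else is illustrative:

// Tail the last 400 lines of a systemd unit's journal, as the
// log-gathering steps above do for kubelet and containerd.
package main

import (
	"fmt"
	"os/exec"
)

func unitLogs(unit string) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").Output()
	return string(out), err
}

func main() {
	for _, u := range []string{"kubelet", "containerd"} {
		logs, err := unitLogs(u)
		if err != nil {
			fmt.Printf("journalctl -u %s failed: %v\n", u, err)
			continue
		}
		fmt.Printf("== %s (%d bytes) ==\n", u, len(logs))
	}
}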
	I1202 22:33:28.409045  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:28.420392  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:28.420486  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:28.451663  546345 cri.go:89] found id: ""
	I1202 22:33:28.451687  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.451696  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:28.451704  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:28.451770  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:28.480763  546345 cri.go:89] found id: ""
	I1202 22:33:28.480788  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.480797  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:28.480804  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:28.480888  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:28.505757  546345 cri.go:89] found id: ""
	I1202 22:33:28.505781  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.505789  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:28.505796  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:28.505882  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:28.530092  546345 cri.go:89] found id: ""
	I1202 22:33:28.530124  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.530134  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:28.530141  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:28.530202  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:28.555441  546345 cri.go:89] found id: ""
	I1202 22:33:28.555468  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.555477  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:28.555484  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:28.555542  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:28.588393  546345 cri.go:89] found id: ""
	I1202 22:33:28.588414  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.588422  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:28.588429  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:28.588498  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:28.615564  546345 cri.go:89] found id: ""
	I1202 22:33:28.615586  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.615595  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:28.615602  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:28.615663  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:28.640294  546345 cri.go:89] found id: ""
	I1202 22:33:28.640316  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.640324  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:28.640333  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:28.640344  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:28.670446  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:28.670473  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:28.731540  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:28.731583  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:28.747338  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:28.747365  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:28.807964  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:28.800513   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:28.801318   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:28.802857   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:28.803139   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:28.804600   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
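The "describe nodes" step invokes the version-pinned kubectl under /var/lib/minikube/binaries with an explicit --kubeconfig, capturing stderr so the connection-refused lines can be re-printed as above. A hedged sketch of that invocation; the paths match the log, the error handling is illustrative:

// Invoke the node-local, version-pinned kubectl and capture stderr,
// mirroring the failing "describe nodes" command in this log.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
	cmd := exec.Command("sudo", kubectl, "describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		fmt.Printf("failed describe nodes: %v\nstderr:\n%s", err, stderr.String())
		return
	}
	fmt.Println("describe nodes succeeded")
}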
	I1202 22:33:28.807987  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:28.808001  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:31.332523  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:31.349889  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:31.349961  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:31.381168  546345 cri.go:89] found id: ""
	I1202 22:33:31.381196  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.381204  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:31.381211  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:31.381274  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:31.408915  546345 cri.go:89] found id: ""
	I1202 22:33:31.408947  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.408956  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:31.408963  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:31.409025  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:31.433408  546345 cri.go:89] found id: ""
	I1202 22:33:31.433433  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.433441  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:31.433448  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:31.433506  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:31.457935  546345 cri.go:89] found id: ""
	I1202 22:33:31.457968  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.457976  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:31.457983  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:31.458053  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:31.481621  546345 cri.go:89] found id: ""
	I1202 22:33:31.481694  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.481704  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:31.481711  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:31.481781  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:31.505764  546345 cri.go:89] found id: ""
	I1202 22:33:31.505789  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.505799  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:31.505805  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:31.505864  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:31.530522  546345 cri.go:89] found id: ""
	I1202 22:33:31.530557  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.530565  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:31.530572  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:31.530639  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:31.558641  546345 cri.go:89] found id: ""
	I1202 22:33:31.558706  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.558720  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:31.558731  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:31.558747  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:31.614675  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:31.614707  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:31.630252  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:31.630279  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:31.695335  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:31.687643   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:31.688201   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:31.689779   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:31.690376   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:31.692067   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:31.695359  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:31.695372  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:31.719979  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:31.720013  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:34.252356  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:34.264856  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:34.264924  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:34.303387  546345 cri.go:89] found id: ""
	I1202 22:33:34.303422  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.303437  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:34.303445  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:34.303502  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:34.377615  546345 cri.go:89] found id: ""
	I1202 22:33:34.377643  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.377665  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:34.377673  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:34.377750  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:34.409336  546345 cri.go:89] found id: ""
	I1202 22:33:34.409359  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.409367  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:34.409374  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:34.409433  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:34.434153  546345 cri.go:89] found id: ""
	I1202 22:33:34.434175  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.434184  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:34.434190  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:34.434250  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:34.459524  546345 cri.go:89] found id: ""
	I1202 22:33:34.459549  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.459558  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:34.459565  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:34.459622  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:34.487835  546345 cri.go:89] found id: ""
	I1202 22:33:34.487862  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.487871  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:34.487878  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:34.487939  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:34.511616  546345 cri.go:89] found id: ""
	I1202 22:33:34.511638  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.511647  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:34.511654  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:34.511712  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:34.539284  546345 cri.go:89] found id: ""
	I1202 22:33:34.539307  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.539315  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:34.539324  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:34.539335  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:34.594370  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:34.594404  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:34.610176  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:34.610203  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:34.674945  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:34.667881   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:34.668374   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:34.669938   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:34.670382   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:34.671879   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:34.674968  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:34.674980  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:34.699820  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:34.699855  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:37.235245  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:37.245512  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:37.245580  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:37.270720  546345 cri.go:89] found id: ""
	I1202 22:33:37.270743  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.270751  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:37.270757  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:37.270818  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:37.317208  546345 cri.go:89] found id: ""
	I1202 22:33:37.317236  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.317244  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:37.317250  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:37.317357  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:37.381241  546345 cri.go:89] found id: ""
	I1202 22:33:37.381304  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.381319  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:37.381331  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:37.381391  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:37.406579  546345 cri.go:89] found id: ""
	I1202 22:33:37.406604  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.406613  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:37.406620  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:37.406676  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:37.431035  546345 cri.go:89] found id: ""
	I1202 22:33:37.431061  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.431071  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:37.431078  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:37.431170  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:37.455450  546345 cri.go:89] found id: ""
	I1202 22:33:37.455476  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.455485  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:37.455491  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:37.455549  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:37.479696  546345 cri.go:89] found id: ""
	I1202 22:33:37.479763  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.479784  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:37.479791  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:37.479864  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:37.504424  546345 cri.go:89] found id: ""
	I1202 22:33:37.504449  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.504465  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:37.504475  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:37.504486  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:37.562929  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:37.562965  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:37.578720  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:37.578749  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:37.643738  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:37.635957   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.636680   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.638363   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.638894   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.640533   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:37.643758  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:37.643770  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:37.669355  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:37.669389  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:40.197629  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:40.209725  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:40.209798  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:40.235226  546345 cri.go:89] found id: ""
	I1202 22:33:40.235249  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.235258  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:40.235265  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:40.235323  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:40.264913  546345 cri.go:89] found id: ""
	I1202 22:33:40.264938  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.264948  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:40.264955  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:40.265014  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:40.292266  546345 cri.go:89] found id: ""
	I1202 22:33:40.292293  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.292302  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:40.292309  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:40.292366  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:40.328677  546345 cri.go:89] found id: ""
	I1202 22:33:40.328703  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.328712  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:40.328718  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:40.328779  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:40.372520  546345 cri.go:89] found id: ""
	I1202 22:33:40.372553  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.372562  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:40.372570  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:40.372637  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:40.401860  546345 cri.go:89] found id: ""
	I1202 22:33:40.401896  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.401906  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:40.401913  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:40.401981  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:40.426706  546345 cri.go:89] found id: ""
	I1202 22:33:40.426774  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.426790  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:40.426797  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:40.426871  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:40.450845  546345 cri.go:89] found id: ""
	I1202 22:33:40.450873  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.450882  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:40.450892  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:40.450921  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:40.466330  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:40.466359  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:40.530421  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:40.522152   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.522737   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.524454   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.524953   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.526601   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:40.522152   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.522737   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.524454   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.524953   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.526601   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:40.530440  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:40.530471  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:40.557935  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:40.557971  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:40.589359  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:40.589413  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
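	Each retry cycle runs the same CRI query once per expected control-plane component and records whether any container ID comes back; an empty result is what produces the paired `found id: ""` / `0 containers` / `No container was found matching ...` lines. The loop can be reproduced roughly as follows (the component list mirrors the names queried in the log; everything else is an illustrative sketch):

	    #!/bin/bash
	    # Query the CRI for each expected component; empty output means the
	    # component has no container in any state on this node.
	    components="kube-apiserver etcd coredns kube-scheduler kube-proxy \
	    kube-controller-manager kindnet kubernetes-dashboard"
	    for name in $components; do
	        ids=$(sudo crictl ps -a --quiet --name="$name")
	        if [ -z "$ids" ]; then
	            echo "No container was found matching \"$name\""
	        else
	            echo "$name: $ids"
	        fi
	    done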
	I1202 22:33:43.149757  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:43.160459  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:43.160531  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:43.185860  546345 cri.go:89] found id: ""
	I1202 22:33:43.185885  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.185893  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:43.185900  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:43.185959  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:43.213745  546345 cri.go:89] found id: ""
	I1202 22:33:43.213771  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.213782  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:43.213788  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:43.213845  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:43.238763  546345 cri.go:89] found id: ""
	I1202 22:33:43.238788  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.238796  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:43.238805  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:43.238865  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:43.263259  546345 cri.go:89] found id: ""
	I1202 22:33:43.263285  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.263294  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:43.263301  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:43.263362  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:43.287780  546345 cri.go:89] found id: ""
	I1202 22:33:43.287804  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.287812  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:43.287818  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:43.287901  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:43.333797  546345 cri.go:89] found id: ""
	I1202 22:33:43.333819  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.333827  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:43.333833  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:43.333891  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:43.379712  546345 cri.go:89] found id: ""
	I1202 22:33:43.379734  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.379743  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:43.379749  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:43.379808  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:43.415160  546345 cri.go:89] found id: ""
	I1202 22:33:43.415240  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.415264  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:43.415282  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:43.415306  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:43.442448  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:43.442475  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:43.497169  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:43.497207  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:43.513334  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:43.513370  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:43.577650  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:43.569606   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.570071   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.571853   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.572346   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.574036   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:43.569606   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.570071   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.571853   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.572346   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.574036   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:43.577691  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:43.577704  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
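	Every "describe nodes" attempt fails the same way: kubectl cannot reach the API server on localhost:8443, so client-side API discovery (the memcache.go errors) logs repeated connection-refused failures and the command exits with status 1. Given the empty crictl results for kube-apiserver above, this is the expected symptom rather than a kubectl problem. A quick reachability probe, shown here only as an illustrative check and not something the test harness runs, would confirm it:

	    #!/bin/bash
	    # Probe the API server's TCP port and health endpoint; both fail
	    # while no kube-apiserver container is running.
	    nc -z localhost 8443 && echo "port open" || echo "connection refused"
	    curl -ksf https://localhost:8443/healthz || echo "apiserver not healthy"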
	I1202 22:33:46.104276  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:46.114696  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:46.114770  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:46.143775  546345 cri.go:89] found id: ""
	I1202 22:33:46.143798  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.143806  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:46.143813  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:46.143872  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:46.168484  546345 cri.go:89] found id: ""
	I1202 22:33:46.168508  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.168517  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:46.168527  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:46.168585  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:46.195213  546345 cri.go:89] found id: ""
	I1202 22:33:46.195236  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.195244  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:46.195251  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:46.195316  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:46.218803  546345 cri.go:89] found id: ""
	I1202 22:33:46.218825  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.218833  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:46.218840  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:46.218902  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:46.242627  546345 cri.go:89] found id: ""
	I1202 22:33:46.242649  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.242657  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:46.242664  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:46.242735  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:46.268270  546345 cri.go:89] found id: ""
	I1202 22:33:46.268299  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.268314  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:46.268322  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:46.268398  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:46.303449  546345 cri.go:89] found id: ""
	I1202 22:33:46.303476  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.303484  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:46.303491  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:46.303547  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:46.355851  546345 cri.go:89] found id: ""
	I1202 22:33:46.355877  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.355886  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:46.355895  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:46.355906  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:46.372396  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:46.372426  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:46.448683  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:46.440678   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.441128   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.442893   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.443519   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.445111   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:46.440678   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.441128   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.442893   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.443519   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.445111   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:46.448707  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:46.448721  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:46.472236  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:46.472269  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:46.501830  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:46.501857  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:49.060676  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:49.071150  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:49.071224  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:49.095927  546345 cri.go:89] found id: ""
	I1202 22:33:49.095949  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.095963  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:49.095970  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:49.096027  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:49.121814  546345 cri.go:89] found id: ""
	I1202 22:33:49.121837  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.121846  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:49.121853  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:49.121911  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:49.150554  546345 cri.go:89] found id: ""
	I1202 22:33:49.150582  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.150590  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:49.150596  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:49.150660  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:49.174636  546345 cri.go:89] found id: ""
	I1202 22:33:49.174660  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.174668  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:49.174675  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:49.174757  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:49.198993  546345 cri.go:89] found id: ""
	I1202 22:33:49.199019  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.199028  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:49.199035  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:49.199122  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:49.237206  546345 cri.go:89] found id: ""
	I1202 22:33:49.237280  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.237304  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:49.237327  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:49.237412  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:49.262326  546345 cri.go:89] found id: ""
	I1202 22:33:49.262395  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.262418  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:49.262437  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:49.262508  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:49.287127  546345 cri.go:89] found id: ""
	I1202 22:33:49.287192  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.287215  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:49.287239  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:49.287269  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:49.365279  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:49.365438  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:49.383138  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:49.383164  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:49.454034  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:49.446536   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.447192   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.448807   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.449446   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.450982   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:49.446536   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.447192   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.448807   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.449446   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.450982   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:49.454054  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:49.454066  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:49.478949  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:49.478982  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:52.007120  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:52.018354  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:52.018431  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:52.048418  546345 cri.go:89] found id: ""
	I1202 22:33:52.048502  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.048527  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:52.048554  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:52.048670  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:52.075756  546345 cri.go:89] found id: ""
	I1202 22:33:52.075795  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.075804  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:52.075811  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:52.075875  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:52.102101  546345 cri.go:89] found id: ""
	I1202 22:33:52.102128  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.102138  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:52.102145  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:52.102213  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:52.127350  546345 cri.go:89] found id: ""
	I1202 22:33:52.127375  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.127390  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:52.127397  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:52.127461  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:52.152298  546345 cri.go:89] found id: ""
	I1202 22:33:52.152325  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.152334  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:52.152340  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:52.152398  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:52.176927  546345 cri.go:89] found id: ""
	I1202 22:33:52.176952  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.176960  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:52.176966  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:52.177023  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:52.203976  546345 cri.go:89] found id: ""
	I1202 22:33:52.204003  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.204012  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:52.204018  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:52.204077  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:52.229381  546345 cri.go:89] found id: ""
	I1202 22:33:52.229408  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.229416  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:52.229425  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:52.229443  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:52.292540  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:52.283085   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.283828   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.285448   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.285967   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.287627   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:52.283085   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.283828   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.285448   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.285967   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.287627   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:52.292561  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:52.292574  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:52.324946  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:52.325102  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:52.369542  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:52.369568  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:52.436122  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:52.436159  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
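	The dmesg gathering step restricts kernel output to warning severity and above so the capture stays small. The flags in the logged command are standard util-linux dmesg options, spelled out below with the same invocation:

	    #!/bin/bash
	    # -P / --nopager : do not pipe output through a pager
	    # -H / --human   : human-readable timestamps
	    # -L=never       : disable color escape codes (clean log capture)
	    # --level ...    : keep only warn, err, crit, alert, emerg messages
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400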
	I1202 22:33:54.953633  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:54.963990  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:54.964062  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:54.991840  546345 cri.go:89] found id: ""
	I1202 22:33:54.991865  546345 logs.go:282] 0 containers: []
	W1202 22:33:54.991873  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:54.991880  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:54.991937  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:55.024217  546345 cri.go:89] found id: ""
	I1202 22:33:55.024241  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.024250  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:55.024258  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:55.024320  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:55.048985  546345 cri.go:89] found id: ""
	I1202 22:33:55.049007  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.049015  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:55.049021  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:55.049086  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:55.073787  546345 cri.go:89] found id: ""
	I1202 22:33:55.073809  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.073818  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:55.073825  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:55.073887  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:55.097827  546345 cri.go:89] found id: ""
	I1202 22:33:55.097849  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.097857  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:55.097864  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:55.097929  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:55.127096  546345 cri.go:89] found id: ""
	I1202 22:33:55.127119  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.127127  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:55.127135  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:55.127247  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:55.155895  546345 cri.go:89] found id: ""
	I1202 22:33:55.155920  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.155929  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:55.155936  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:55.155998  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:55.184917  546345 cri.go:89] found id: ""
	I1202 22:33:55.184943  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.184951  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:55.184960  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:55.184973  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:55.245409  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:55.238197   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.238600   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.240244   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.240779   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.242395   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:55.238197   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.238600   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.240244   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.240779   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.242395   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:55.245430  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:55.245443  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:55.269272  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:55.269303  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:55.324186  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:55.324256  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:55.407948  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:55.408021  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:57.927547  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:57.938134  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:57.938208  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:57.966983  546345 cri.go:89] found id: ""
	I1202 22:33:57.967016  546345 logs.go:282] 0 containers: []
	W1202 22:33:57.967025  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:57.967031  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:57.967090  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:57.990911  546345 cri.go:89] found id: ""
	I1202 22:33:57.990934  546345 logs.go:282] 0 containers: []
	W1202 22:33:57.990942  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:57.990949  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:57.991006  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:58.027051  546345 cri.go:89] found id: ""
	I1202 22:33:58.027076  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.027085  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:58.027091  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:58.027170  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:58.052767  546345 cri.go:89] found id: ""
	I1202 22:33:58.052791  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.052801  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:58.052808  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:58.052866  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:58.077589  546345 cri.go:89] found id: ""
	I1202 22:33:58.077616  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.077626  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:58.077634  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:58.077736  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:58.102352  546345 cri.go:89] found id: ""
	I1202 22:33:58.102377  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.102385  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:58.102394  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:58.102453  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:58.127151  546345 cri.go:89] found id: ""
	I1202 22:33:58.127174  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.127183  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:58.127203  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:58.127264  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:58.153068  546345 cri.go:89] found id: ""
	I1202 22:33:58.153097  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.153106  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:58.153116  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:58.153128  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:58.207341  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:58.207375  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:58.223908  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:58.223993  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:58.303303  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:58.282435   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.282890   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.284669   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.285085   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.286613   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:58.282435   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.282890   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.284669   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.285085   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.286613   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:58.303374  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:58.303401  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:58.339284  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:58.339358  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:00.884684  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:00.894955  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:00.895043  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:00.919607  546345 cri.go:89] found id: ""
	I1202 22:34:00.919638  546345 logs.go:282] 0 containers: []
	W1202 22:34:00.919648  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:00.919655  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:00.919714  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:00.943845  546345 cri.go:89] found id: ""
	I1202 22:34:00.943869  546345 logs.go:282] 0 containers: []
	W1202 22:34:00.943877  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:00.943883  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:00.943942  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:00.969291  546345 cri.go:89] found id: ""
	I1202 22:34:00.969316  546345 logs.go:282] 0 containers: []
	W1202 22:34:00.969325  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:00.969332  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:00.969387  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:00.998170  546345 cri.go:89] found id: ""
	I1202 22:34:00.998194  546345 logs.go:282] 0 containers: []
	W1202 22:34:00.998203  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:00.998210  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:00.998267  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:01.028082  546345 cri.go:89] found id: ""
	I1202 22:34:01.028108  546345 logs.go:282] 0 containers: []
	W1202 22:34:01.028118  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:01.028125  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:01.028182  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:01.052163  546345 cri.go:89] found id: ""
	I1202 22:34:01.052190  546345 logs.go:282] 0 containers: []
	W1202 22:34:01.052198  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:01.052204  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:01.052261  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:01.079605  546345 cri.go:89] found id: ""
	I1202 22:34:01.079638  546345 logs.go:282] 0 containers: []
	W1202 22:34:01.079648  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:01.079655  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:01.079727  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:01.104672  546345 cri.go:89] found id: ""
	I1202 22:34:01.104697  546345 logs.go:282] 0 containers: []
	W1202 22:34:01.104705  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:01.104714  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:01.104727  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:01.168637  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:01.168689  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:01.186088  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:01.186120  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:01.254373  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:01.244820   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.245479   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.247310   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.247977   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.250513   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:34:01.244820   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.245479   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.247310   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.247977   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.250513   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:01.254405  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:01.254421  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:01.279534  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:01.279570  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:03.844056  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:03.854485  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:03.854559  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:03.883518  546345 cri.go:89] found id: ""
	I1202 22:34:03.883539  546345 logs.go:282] 0 containers: []
	W1202 22:34:03.883547  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:03.883555  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:03.883616  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:03.907609  546345 cri.go:89] found id: ""
	I1202 22:34:03.907634  546345 logs.go:282] 0 containers: []
	W1202 22:34:03.907643  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:03.907650  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:03.907708  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:03.931661  546345 cri.go:89] found id: ""
	I1202 22:34:03.931686  546345 logs.go:282] 0 containers: []
	W1202 22:34:03.931694  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:03.931701  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:03.931762  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:03.956212  546345 cri.go:89] found id: ""
	I1202 22:34:03.956236  546345 logs.go:282] 0 containers: []
	W1202 22:34:03.956245  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:03.956252  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:03.956310  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:03.982858  546345 cri.go:89] found id: ""
	I1202 22:34:03.982882  546345 logs.go:282] 0 containers: []
	W1202 22:34:03.982890  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:03.982899  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:03.982955  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:04.008609  546345 cri.go:89] found id: ""
	I1202 22:34:04.008637  546345 logs.go:282] 0 containers: []
	W1202 22:34:04.008646  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:04.008654  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:04.008718  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:04.034395  546345 cri.go:89] found id: ""
	I1202 22:34:04.034426  546345 logs.go:282] 0 containers: []
	W1202 22:34:04.034436  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:04.034443  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:04.034503  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:04.059450  546345 cri.go:89] found id: ""
	I1202 22:34:04.059474  546345 logs.go:282] 0 containers: []
	W1202 22:34:04.059482  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:04.059492  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:04.059503  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:04.116204  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:04.116237  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:04.131753  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:04.131779  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:04.195398  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:04.187783   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.188327   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.189976   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.190535   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.192070   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:34:04.187783   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.188327   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.189976   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.190535   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.192070   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:04.195417  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:04.195431  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:04.220265  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:04.220302  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:06.748017  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:06.758416  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:06.758487  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:06.786850  546345 cri.go:89] found id: ""
	I1202 22:34:06.786877  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.786886  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:06.786893  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:06.786958  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:06.811248  546345 cri.go:89] found id: ""
	I1202 22:34:06.811274  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.811283  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:06.811290  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:06.811352  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:06.835885  546345 cri.go:89] found id: ""
	I1202 22:34:06.835911  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.835920  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:06.835927  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:06.835986  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:06.861031  546345 cri.go:89] found id: ""
	I1202 22:34:06.861057  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.861066  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:06.861076  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:06.861137  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:06.885492  546345 cri.go:89] found id: ""
	I1202 22:34:06.885518  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.885526  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:06.885533  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:06.885621  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:06.911207  546345 cri.go:89] found id: ""
	I1202 22:34:06.911233  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.911242  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:06.911249  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:06.911307  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:06.936761  546345 cri.go:89] found id: ""
	I1202 22:34:06.936786  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.936794  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:06.936801  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:06.936858  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:06.961200  546345 cri.go:89] found id: ""
	I1202 22:34:06.961225  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.961233  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:06.961242  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:06.961253  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:07.017396  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:07.017432  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:07.033140  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:07.033220  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:07.098724  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:07.091082   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.091775   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.093263   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.093721   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.095156   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:34:07.091082   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.091775   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.093263   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.093721   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.095156   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:07.098749  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:07.098764  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:07.123278  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:07.123313  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:09.654822  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:09.666550  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:09.666631  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:09.696479  546345 cri.go:89] found id: ""
	I1202 22:34:09.696501  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.696510  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:09.696516  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:09.696573  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:09.720695  546345 cri.go:89] found id: ""
	I1202 22:34:09.720717  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.720725  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:09.720732  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:09.720789  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:09.743340  546345 cri.go:89] found id: ""
	I1202 22:34:09.743366  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.743374  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:09.743381  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:09.743441  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:09.771827  546345 cri.go:89] found id: ""
	I1202 22:34:09.771851  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.771859  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:09.771866  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:09.771942  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:09.800440  546345 cri.go:89] found id: ""
	I1202 22:34:09.800511  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.800522  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:09.800529  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:09.800599  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:09.827898  546345 cri.go:89] found id: ""
	I1202 22:34:09.827933  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.827942  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:09.827949  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:09.828053  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:09.851874  546345 cri.go:89] found id: ""
	I1202 22:34:09.851909  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.851918  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:09.851925  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:09.852023  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:09.876063  546345 cri.go:89] found id: ""
	I1202 22:34:09.876098  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.876106  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:09.876136  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:09.876157  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:09.931102  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:09.931140  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:09.947006  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:09.947033  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:10.016167  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:10.007437   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.008283   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.010218   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.010846   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.012661   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:34:10.007437   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.008283   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.010218   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.010846   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.012661   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:10.016189  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:10.016202  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:10.042713  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:10.042746  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:12.574841  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:12.602704  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:12.602776  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:12.630259  546345 cri.go:89] found id: ""
	I1202 22:34:12.630283  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.630291  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:12.630298  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:12.630356  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:12.653540  546345 cri.go:89] found id: ""
	I1202 22:34:12.653571  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.653580  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:12.653587  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:12.653726  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:12.678660  546345 cri.go:89] found id: ""
	I1202 22:34:12.678685  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.678694  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:12.678701  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:12.678761  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:12.702118  546345 cri.go:89] found id: ""
	I1202 22:34:12.702147  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.702155  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:12.702162  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:12.702262  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:12.729590  546345 cri.go:89] found id: ""
	I1202 22:34:12.729615  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.729624  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:12.729631  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:12.729713  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:12.755560  546345 cri.go:89] found id: ""
	I1202 22:34:12.755586  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.755594  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:12.755601  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:12.755656  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:12.788269  546345 cri.go:89] found id: ""
	I1202 22:34:12.788293  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.788302  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:12.788308  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:12.788366  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:12.812214  546345 cri.go:89] found id: ""
	I1202 22:34:12.812239  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.812248  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:12.812257  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:12.812268  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:12.841941  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:12.841966  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:12.896188  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:12.896219  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:12.911694  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:12.911721  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:12.975342  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:12.967919   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.968476   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.970099   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.970747   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.972209   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:34:12.967919   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.968476   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.970099   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.970747   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.972209   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:12.975377  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:12.975389  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:15.502887  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:15.513338  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:15.513418  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:15.536875  546345 cri.go:89] found id: ""
	I1202 22:34:15.536897  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.536905  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:15.536911  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:15.536970  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:15.573309  546345 cri.go:89] found id: ""
	I1202 22:34:15.573335  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.573360  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:15.573368  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:15.573433  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:15.623126  546345 cri.go:89] found id: ""
	I1202 22:34:15.623149  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.623157  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:15.623164  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:15.623221  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:15.657458  546345 cri.go:89] found id: ""
	I1202 22:34:15.657484  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.657493  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:15.657500  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:15.657568  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:15.681354  546345 cri.go:89] found id: ""
	I1202 22:34:15.681380  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.681389  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:15.681395  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:15.681456  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:15.705775  546345 cri.go:89] found id: ""
	I1202 22:34:15.705848  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.705874  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:15.705894  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:15.705971  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:15.731425  546345 cri.go:89] found id: ""
	I1202 22:34:15.731448  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.731457  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:15.731464  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:15.731521  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:15.755658  546345 cri.go:89] found id: ""
	I1202 22:34:15.755682  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.755690  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:15.755699  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:15.755711  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:15.811079  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:15.811113  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:15.827246  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:15.827272  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:15.889878  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:15.882005   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.882392   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.884118   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.884767   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.886280   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:34:15.882005   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.882392   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.884118   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.884767   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.886280   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:15.889899  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:15.889912  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:15.915317  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:15.915350  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:18.445059  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:18.458773  546345 out.go:203] 
	W1202 22:34:18.461733  546345 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1202 22:34:18.461774  546345 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1202 22:34:18.461784  546345 out.go:285] * Related issues:
	W1202 22:34:18.461797  546345 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1202 22:34:18.461818  546345 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1202 22:34:18.464650  546345 out.go:203] 
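	The loop above is minikube's apiserver wait: roughly every three seconds it runs sudo pgrep -xnf kube-apiserver.*minikube.* and, finding no process, re-enumerates each control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) with crictl, then re-gathers the kubelet, dmesg, describe-nodes, containerd, and container-status logs. Every crictl query returns an empty ID list, so the 6m0s budget expires with K8S_APISERVER_MISSING. A minimal manual reproduction of the same probe (a sketch, assuming a shell on the node; the profile name is taken from the logs below):
	
	    out/minikube-linux-arm64 ssh -p newest-cni-250247   # shell into the node
	    sudo pgrep -af kube-apiserver                       # any apiserver process at all?
	    sudo crictl ps -a --name kube-apiserver             # any container created for it?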
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311616018Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311627086Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311638754Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311647836Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311661645Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311673271Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311695220Z" level=info msg="runtime interface created"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311701619Z" level=info msg="created NRI interface"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311714862Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311747001Z" level=info msg="Connect containerd service"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311985262Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.313086719Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.326105435Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.326173330Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.326202376Z" level=info msg="Start subscribing containerd event"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.326244262Z" level=info msg="Start recovering state"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.346481012Z" level=info msg="Start event monitor"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.346541925Z" level=info msg="Start cni network conf syncer for default"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.346551911Z" level=info msg="Start streaming server"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.346565096Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.346574146Z" level=info msg="runtime interface starting up..."
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.346580505Z" level=info msg="starting plugins..."
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.346733550Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 22:28:16 newest-cni-250247 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.348731317Z" level=info msg="containerd successfully booted in 0.056907s"
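	
	The only error in the containerd startup sequence is the CNI load failure: no network config exists in /etc/cni/net.d, so the CRI plugin comes up with the network not ready. For a newest-cni profile this is expected until a CNI config is installed; it can be confirmed from the node (a sketch, assuming crictl is present, as the commands above show it is):
	
	    ls -la /etc/cni/net.d/                     # empty dir matches "no network config found"
	    sudo crictl info | grep -i networkready    # CRI status reports the NetworkReady condition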
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:21.737127   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:21.737855   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:21.738678   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:21.740117   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:21.740630   13485 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
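	
	In this block, as in the wait loop above, kubectl reads /var/lib/minikube/kubeconfig, which points at https://localhost:8443 inside the node, and the refusal just means nothing is listening on that port. A direct check (a sketch, assuming ss from iproute2 is available in the node image):
	
	    sudo ss -ltn 'sport = :8443'    # no output => no apiserver listener; refusal is expected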
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 22:34:21 up  4:16,  0 user,  load average: 1.85, 1.03, 1.12
	Linux newest-cni-250247 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 22:34:18 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:34:19 newest-cni-250247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 02 22:34:19 newest-cni-250247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:19 newest-cni-250247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:19 newest-cni-250247 kubelet[13367]: E1202 22:34:19.413309   13367 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:34:19 newest-cni-250247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:34:19 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:34:20 newest-cni-250247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 02 22:34:20 newest-cni-250247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:20 newest-cni-250247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:20 newest-cni-250247 kubelet[13372]: E1202 22:34:20.096996   13372 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:34:20 newest-cni-250247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:34:20 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:34:20 newest-cni-250247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 02 22:34:20 newest-cni-250247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:20 newest-cni-250247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:20 newest-cni-250247 kubelet[13392]: E1202 22:34:20.855276   13392 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:34:20 newest-cni-250247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:34:20 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:34:21 newest-cni-250247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 02 22:34:21 newest-cni-250247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:21 newest-cni-250247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:21 newest-cni-250247 kubelet[13450]: E1202 22:34:21.602944   13450 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:34:21 newest-cni-250247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:34:21 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
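	
	This is the root cause of the whole failure: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host, exits during config validation, and is restarted by systemd (the counter reaches 486 by the end of the window). Because kubelet never runs, no static pods, kube-apiserver included, are ever created, and everything above (empty crictl listings, connection refused on 8443, K8S_APISERVER_MISSING) follows. Two ways to watch the loop on the node (a sketch):
	
	    sudo journalctl -u kubelet -n 20 --no-pager    # the validate-fail/restart cycle
	    systemctl show kubelet -p NRestarts            # systemd's restart counter for the unit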
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-250247 -n newest-cni-250247
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-250247 -n newest-cni-250247: exit status 2 (353.653402ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-250247" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (373.54s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.33s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
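Each WARNING below is one failed poll of the apiserver at 192.168.76.2:8443; the connection is refused for the full wait, so the dashboard pods can never be listed. The equivalent manual queries (a sketch, assuming kubectl's current context points at the no-preload profile):

    kubectl get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
    curl -sk https://192.168.76.2:8443/healthz    # connection refused matches the warnings below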
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[11 identical WARNING lines omitted]
E1202 22:29:44.122424  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous WARNING line repeated 92 more times]
E1202 22:31:17.027101  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/default-k8s-diff-port-444714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous WARNING line repeated 11 more times]
E1202 22:31:28.655346  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous WARNING line repeated 66 more times]
E1202 22:32:36.514293  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
last message repeated 13 times
E1202 22:32:50.414594  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
last message repeated 1 time
E1202 22:32:51.719394  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
last message repeated 111 times
E1202 22:34:44.123129  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
last message repeated 36 times
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1202 22:36:17.026678  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/default-k8s-diff-port-444714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 11 more times]
E1202 22:36:28.654680  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 50 more times]
E1202 22:37:19.591390  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 15 more times]
E1202 22:37:36.513892  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 3 more times]
E1202 22:37:40.091702  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/default-k8s-diff-port-444714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[last message repeated 9 more times]
I1202 22:37:49.730551  263241 config.go:182] Loaded profile config "kindnet-577910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[previous warning repeated 42 times in total]
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-904303 -n no-preload-904303
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-904303 -n no-preload-904303: exit status 2 (442.672248ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-904303" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
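For local triage, the check that timed out above can be reproduced by hand. A minimal sketch, assuming the kubeconfig context carries the profile name (which minikube sets by default when it creates the profile):

	kubectl --context no-preload-904303 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	out/minikube-linux-arm64 status -p no-preload-904303

While the apiserver on 192.168.76.2:8443 refuses connections, the kubectl call fails exactly like the poll warnings above, and status reports the control plane as Stopped.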
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-904303
helpers_test.go:243: (dbg) docker inspect no-preload-904303:

-- stdout --
	[
	    {
	        "Id": "419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436",
	        "Created": "2025-12-02T22:12:48.891111789Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 539728,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T22:23:22.642400086Z",
	            "FinishedAt": "2025-12-02T22:23:21.316417439Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/hostname",
	        "HostsPath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/hosts",
	        "LogPath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436-json.log",
	        "Name": "/no-preload-904303",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-904303:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-904303",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436",
	                "LowerDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021/merged",
	                "UpperDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021/diff",
	                "WorkDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-904303",
	                "Source": "/var/lib/docker/volumes/no-preload-904303/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-904303",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-904303",
	                "name.minikube.sigs.k8s.io": "no-preload-904303",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b2c027d5072096e798c0b710c59b479b1cd1269246af142ef5e7ac6eb2231d21",
	            "SandboxKey": "/var/run/docker/netns/b2c027d50720",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-904303": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:71:1d:c1:74:1c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd7fe0193300ea97495798d9ee6ddb57b917596827758698a61d4a79d61723bf",
	                    "EndpointID": "d640ee5b3f22cc33822a769221598d10c33902fafb82f4150c227e00cda4eee4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-904303",
	                        "419e3dce7c5d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
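The fields that matter here can be pulled out of the inspect payload with a Go template instead of reading the full JSON; a sketch using only names taken from the output above:

	docker inspect -f '{{.State.Status}} started={{.State.StartedAt}}' no-preload-904303
	docker port no-preload-904303 8443

The first command confirms the container itself is running; the second prints the 127.0.0.1:33421 host-side mapping of the apiserver port, which is what any probe from the host has to target.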
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-904303 -n no-preload-904303
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-904303 -n no-preload-904303: exit status 2 (417.080499ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
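Host=Running combined with APIServer=Stopped points at the control plane inside an otherwise healthy container. Two follow-up probes (a sketch, reusing the published 8443 mapping from the inspect output and the same ssh pattern the audit log below uses for kindnet-577910):

	curl -k https://127.0.0.1:33421/healthz
	out/minikube-linux-arm64 ssh -p no-preload-904303 sudo systemctl status kubelet --all --full --no-pager

Until the apiserver comes back, the curl probe fails with the same connection refused reported in the warnings above.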
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-904303 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                      │    PROFILE     │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-577910 sudo systemctl status kubelet --all --full --no-pager                                                                        │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │ 02 Dec 25 22:38 UTC │
	│ ssh     │ -p kindnet-577910 sudo systemctl cat kubelet --no-pager                                                                                        │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │ 02 Dec 25 22:38 UTC │
	│ ssh     │ -p kindnet-577910 sudo journalctl -xeu kubelet --all --full --no-pager                                                                         │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │ 02 Dec 25 22:38 UTC │
	│ ssh     │ -p kindnet-577910 sudo cat /etc/kubernetes/kubelet.conf                                                                                        │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │ 02 Dec 25 22:38 UTC │
	│ ssh     │ -p kindnet-577910 sudo cat /var/lib/kubelet/config.yaml                                                                                        │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │ 02 Dec 25 22:38 UTC │
	│ ssh     │ -p kindnet-577910 sudo systemctl status docker --all --full --no-pager                                                                         │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │                     │
	│ ssh     │ -p kindnet-577910 sudo systemctl cat docker --no-pager                                                                                         │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │ 02 Dec 25 22:38 UTC │
	│ ssh     │ -p kindnet-577910 sudo cat /etc/docker/daemon.json                                                                                             │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │                     │
	│ ssh     │ -p kindnet-577910 sudo docker system info                                                                                                      │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │                     │
	│ ssh     │ -p kindnet-577910 sudo systemctl status cri-docker --all --full --no-pager                                                                     │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │                     │
	│ ssh     │ -p kindnet-577910 sudo systemctl cat cri-docker --no-pager                                                                                     │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │ 02 Dec 25 22:38 UTC │
	│ ssh     │ -p kindnet-577910 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │                     │
	│ ssh     │ -p kindnet-577910 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                          │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │ 02 Dec 25 22:38 UTC │
	│ ssh     │ -p kindnet-577910 sudo cri-dockerd --version                                                                                                   │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │ 02 Dec 25 22:38 UTC │
	│ ssh     │ -p kindnet-577910 sudo systemctl status containerd --all --full --no-pager                                                                     │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │ 02 Dec 25 22:38 UTC │
	│ ssh     │ -p kindnet-577910 sudo systemctl cat containerd --no-pager                                                                                     │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │ 02 Dec 25 22:38 UTC │
	│ ssh     │ -p kindnet-577910 sudo cat /lib/systemd/system/containerd.service                                                                              │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │ 02 Dec 25 22:38 UTC │
	│ ssh     │ -p kindnet-577910 sudo cat /etc/containerd/config.toml                                                                                         │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │ 02 Dec 25 22:38 UTC │
	│ ssh     │ -p kindnet-577910 sudo containerd config dump                                                                                                  │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │ 02 Dec 25 22:38 UTC │
	│ ssh     │ -p kindnet-577910 sudo systemctl status crio --all --full --no-pager                                                                           │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │                     │
	│ ssh     │ -p kindnet-577910 sudo systemctl cat crio --no-pager                                                                                           │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │ 02 Dec 25 22:38 UTC │
	│ ssh     │ -p kindnet-577910 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                 │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │ 02 Dec 25 22:38 UTC │
	│ ssh     │ -p kindnet-577910 sudo crio config                                                                                                             │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │ 02 Dec 25 22:38 UTC │
	│ delete  │ -p kindnet-577910                                                                                                                              │ kindnet-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │ 02 Dec 25 22:38 UTC │
	│ start   │ -p flannel-577910 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd │ flannel-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:38 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 22:38:18
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 22:38:18.946988  578590 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:38:18.947167  578590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:38:18.947200  578590 out.go:374] Setting ErrFile to fd 2...
	I1202 22:38:18.947222  578590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:38:18.947515  578590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 22:38:18.947968  578590 out.go:368] Setting JSON to false
	I1202 22:38:18.948902  578590 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15637,"bootTime":1764699462,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 22:38:18.949011  578590 start.go:143] virtualization:  
	I1202 22:38:18.955209  578590 out.go:179] * [flannel-577910] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:38:18.958980  578590 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:38:18.959064  578590 notify.go:221] Checking for updates...
	I1202 22:38:18.965837  578590 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:38:18.969073  578590 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:38:18.972204  578590 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 22:38:18.975311  578590 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:38:18.978299  578590 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:38:18.981910  578590 config.go:182] Loaded profile config "no-preload-904303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:38:18.982054  578590 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:38:19.018065  578590 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:38:19.018188  578590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:38:19.096099  578590 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:38:19.08596458 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:38:19.096205  578590 docker.go:319] overlay module found
	I1202 22:38:19.100630  578590 out.go:179] * Using the docker driver based on user configuration
	I1202 22:38:19.103662  578590 start.go:309] selected driver: docker
	I1202 22:38:19.103684  578590 start.go:927] validating driver "docker" against <nil>
	I1202 22:38:19.103700  578590 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:38:19.104418  578590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:38:19.160349  578590 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:38:19.150754346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:38:19.160526  578590 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 22:38:19.160795  578590 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 22:38:19.163908  578590 out.go:179] * Using Docker driver with root privileges
	I1202 22:38:19.166823  578590 cni.go:84] Creating CNI manager for "flannel"
	I1202 22:38:19.166844  578590 start_flags.go:336] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1202 22:38:19.166931  578590 start.go:353] cluster config:
	{Name:flannel-577910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-577910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:38:19.171922  578590 out.go:179] * Starting "flannel-577910" primary control-plane node in "flannel-577910" cluster
	I1202 22:38:19.174774  578590 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 22:38:19.177790  578590 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 22:38:19.180676  578590 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1202 22:38:19.180734  578590 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1202 22:38:19.180750  578590 cache.go:65] Caching tarball of preloaded images
	I1202 22:38:19.180753  578590 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 22:38:19.180890  578590 preload.go:238] Found /home/jenkins/minikube-integration/21997-261381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1202 22:38:19.180902  578590 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1202 22:38:19.181026  578590 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/config.json ...
	I1202 22:38:19.181076  578590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/config.json: {Name:mkac7866bf6537d1b9db1ecc91c858cf0e6fdb31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:38:19.199911  578590 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 22:38:19.199933  578590 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 22:38:19.199952  578590 cache.go:243] Successfully downloaded all kic artifacts
	I1202 22:38:19.199983  578590 start.go:360] acquireMachinesLock for flannel-577910: {Name:mkc734926f5056d68fd4d192d9a409bf4c806f76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:38:19.200092  578590 start.go:364] duration metric: took 88.867µs to acquireMachinesLock for "flannel-577910"
	I1202 22:38:19.200123  578590 start.go:93] Provisioning new machine with config: &{Name:flannel-577910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-577910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 22:38:19.200198  578590 start.go:125] createHost starting for "" (driver="docker")
	I1202 22:38:19.203650  578590 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 22:38:19.203872  578590 start.go:159] libmachine.API.Create for "flannel-577910" (driver="docker")
	I1202 22:38:19.203908  578590 client.go:173] LocalClient.Create starting
	I1202 22:38:19.203999  578590 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem
	I1202 22:38:19.204036  578590 main.go:143] libmachine: Decoding PEM data...
	I1202 22:38:19.204056  578590 main.go:143] libmachine: Parsing certificate...
	I1202 22:38:19.204120  578590 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem
	I1202 22:38:19.204142  578590 main.go:143] libmachine: Decoding PEM data...
	I1202 22:38:19.204161  578590 main.go:143] libmachine: Parsing certificate...
	I1202 22:38:19.204526  578590 cli_runner.go:164] Run: docker network inspect flannel-577910 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 22:38:19.219972  578590 cli_runner.go:211] docker network inspect flannel-577910 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 22:38:19.220059  578590 network_create.go:284] running [docker network inspect flannel-577910] to gather additional debugging logs...
	I1202 22:38:19.220082  578590 cli_runner.go:164] Run: docker network inspect flannel-577910
	W1202 22:38:19.238366  578590 cli_runner.go:211] docker network inspect flannel-577910 returned with exit code 1
	I1202 22:38:19.238396  578590 network_create.go:287] error running [docker network inspect flannel-577910]: docker network inspect flannel-577910: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network flannel-577910 not found
	I1202 22:38:19.238410  578590 network_create.go:289] output of [docker network inspect flannel-577910]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network flannel-577910 not found
	
	** /stderr **
	I1202 22:38:19.238514  578590 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:38:19.256007  578590 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-37045a918311 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:0e:6a:1d:5f:aa} reservation:<nil>}
	I1202 22:38:19.256352  578590 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-11c615b6a402 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:e5:fa:65:65:bf} reservation:<nil>}
	I1202 22:38:19.256695  578590 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-efeb1d3ec8c6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:0d:78:3a:6e:22} reservation:<nil>}
	I1202 22:38:19.256957  578590 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bd7fe0193300 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:96:46:f1:c8:59:e0} reservation:<nil>}
	I1202 22:38:19.257344  578590 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a019f0}
	I1202 22:38:19.257366  578590 network_create.go:124] attempt to create docker network flannel-577910 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1202 22:38:19.257424  578590 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=flannel-577910 flannel-577910
	I1202 22:38:19.317243  578590 network_create.go:108] docker network flannel-577910 192.168.85.0/24 created
	I1202 22:38:19.317277  578590 kic.go:121] calculated static IP "192.168.85.2" for the "flannel-577910" container
	I1202 22:38:19.317349  578590 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 22:38:19.332903  578590 cli_runner.go:164] Run: docker volume create flannel-577910 --label name.minikube.sigs.k8s.io=flannel-577910 --label created_by.minikube.sigs.k8s.io=true
	I1202 22:38:19.349570  578590 oci.go:103] Successfully created a docker volume flannel-577910
	I1202 22:38:19.349684  578590 cli_runner.go:164] Run: docker run --rm --name flannel-577910-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-577910 --entrypoint /usr/bin/test -v flannel-577910:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 22:38:19.863005  578590 oci.go:107] Successfully prepared a docker volume flannel-577910
	I1202 22:38:19.863072  578590 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1202 22:38:19.863084  578590 kic.go:194] Starting extracting preloaded images to volume ...
	I1202 22:38:19.863171  578590 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-261381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v flannel-577910:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1202 22:38:23.837996  578590 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-261381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v flannel-577910:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (3.974768141s)
	I1202 22:38:23.838035  578590 kic.go:203] duration metric: took 3.974942823s to extract preloaded images to volume ...
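	Editor's note: the Run/Completed pair above is the preload fast path: a throwaway container whose entrypoint is /usr/bin/tar unpacks the lz4-compressed image tarball straight into the flannel-577910 volume, which took about 4s here. A sketch of how such an invocation is assembled with os/exec; the paths and image reference are copied from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		tarball := "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4"
		volume := "flannel-577910"
		image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974"

		// Same shape as the docker run logged above: bind-mount the tarball
		// read-only, mount the volume at /extractDir, and let tar do the work.
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("extract failed: %v\n%s", err, out)
		}
	}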
	W1202 22:38:23.838172  578590 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1202 22:38:23.838284  578590 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 22:38:23.888680  578590 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname flannel-577910 --name flannel-577910 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-577910 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=flannel-577910 --network flannel-577910 --ip 192.168.85.2 --volume flannel-577910:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 22:38:24.214800  578590 cli_runner.go:164] Run: docker container inspect flannel-577910 --format={{.State.Running}}
	I1202 22:38:24.242292  578590 cli_runner.go:164] Run: docker container inspect flannel-577910 --format={{.State.Status}}
	I1202 22:38:24.265026  578590 cli_runner.go:164] Run: docker exec flannel-577910 stat /var/lib/dpkg/alternatives/iptables
	I1202 22:38:24.336509  578590 oci.go:144] the created container "flannel-577910" has a running status.
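	Editor's note: the --publish=127.0.0.1::22 style flags in the docker run above leave the host port empty, so Docker assigns an ephemeral loopback port, and every later step first recovers it with docker container inspect (port 33438 in this run). A sketch of that lookup, reusing the exact Go template string from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template cli_runner uses to find which host port Docker
		// assigned to the container's sshd (22/tcp).
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "flannel-577910").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh port:", strings.TrimSpace(string(out))) // 33438 in this run
	}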
	I1202 22:38:24.336540  578590 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/flannel-577910/id_rsa...
	I1202 22:38:24.749340  578590 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-261381/.minikube/machines/flannel-577910/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 22:38:24.777323  578590 cli_runner.go:164] Run: docker container inspect flannel-577910 --format={{.State.Status}}
	I1202 22:38:24.810136  578590 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 22:38:24.810156  578590 kic_runner.go:114] Args: [docker exec --privileged flannel-577910 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 22:38:24.862843  578590 cli_runner.go:164] Run: docker container inspect flannel-577910 --format={{.State.Status}}
	I1202 22:38:24.907377  578590 machine.go:94] provisionDockerMachine start ...
	I1202 22:38:24.907493  578590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-577910
	I1202 22:38:24.934724  578590 main.go:143] libmachine: Using SSH client type: native
	I1202 22:38:24.935084  578590 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1202 22:38:24.935094  578590 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 22:38:25.137496  578590 main.go:143] libmachine: SSH cmd err, output: <nil>: flannel-577910
	
	I1202 22:38:25.137536  578590 ubuntu.go:182] provisioning hostname "flannel-577910"
	I1202 22:38:25.137630  578590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-577910
	I1202 22:38:25.158926  578590 main.go:143] libmachine: Using SSH client type: native
	I1202 22:38:25.159237  578590 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1202 22:38:25.159253  578590 main.go:143] libmachine: About to run SSH command:
	sudo hostname flannel-577910 && echo "flannel-577910" | sudo tee /etc/hostname
	I1202 22:38:25.323608  578590 main.go:143] libmachine: SSH cmd err, output: <nil>: flannel-577910
	
	I1202 22:38:25.323683  578590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-577910
	I1202 22:38:25.346186  578590 main.go:143] libmachine: Using SSH client type: native
	I1202 22:38:25.346491  578590 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1202 22:38:25.346508  578590 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-577910' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-577910/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-577910' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 22:38:25.501831  578590 main.go:143] libmachine: SSH cmd err, output: <nil>: 
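	Editor's note: everything from provisionDockerMachine onward (hostname, the /etc/hosts edit above) runs over SSH to 127.0.0.1:33438 using the id_rsa generated a moment earlier. A minimal client sketch, assuming golang.org/x/crypto/ssh as libmachine uses; host-key verification is skipped only because the target is a local kic container:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyPath := "/home/jenkins/minikube-integration/21997-261381/.minikube/machines/flannel-577910/id_rsa"
		key, err := os.ReadFile(keyPath)
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33438", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out) // flannel-577910
	}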
	I1202 22:38:25.501860  578590 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 22:38:25.501880  578590 ubuntu.go:190] setting up certificates
	I1202 22:38:25.501889  578590 provision.go:84] configureAuth start
	I1202 22:38:25.501947  578590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-577910
	I1202 22:38:25.520031  578590 provision.go:143] copyHostCerts
	I1202 22:38:25.520112  578590 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 22:38:25.520125  578590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 22:38:25.520203  578590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 22:38:25.520342  578590 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 22:38:25.520355  578590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 22:38:25.520388  578590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 22:38:25.520448  578590 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 22:38:25.520456  578590 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 22:38:25.520481  578590 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 22:38:25.520536  578590 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.flannel-577910 san=[127.0.0.1 192.168.85.2 flannel-577910 localhost minikube]
	I1202 22:38:25.947002  578590 provision.go:177] copyRemoteCerts
	I1202 22:38:25.947103  578590 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 22:38:25.947153  578590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-577910
	I1202 22:38:25.965739  578590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/flannel-577910/id_rsa Username:docker}
	I1202 22:38:26.073969  578590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 22:38:26.092331  578590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 22:38:26.110938  578590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1202 22:38:26.127641  578590 provision.go:87] duration metric: took 625.729317ms to configureAuth
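	Editor's note: the "generating server cert ... san=[127.0.0.1 192.168.85.2 flannel-577910 localhost minikube]" step above is standard crypto/x509 work. A self-contained sketch issuing a cert with the same SANs; it self-signs for brevity, whereas minikube signs with its ca.pem/ca-key.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.flannel-577910"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The SANs logged above: hostnames plus the node and loopback IPs.
			DNSNames:    []string{"flannel-577910", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		}
		// Self-signed here for brevity; minikube signs with its CA key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}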
	I1202 22:38:26.127669  578590 ubuntu.go:206] setting minikube options for container-runtime
	I1202 22:38:26.127858  578590 config.go:182] Loaded profile config "flannel-577910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 22:38:26.127871  578590 machine.go:97] duration metric: took 1.220475951s to provisionDockerMachine
	I1202 22:38:26.127878  578590 client.go:176] duration metric: took 6.923959499s to LocalClient.Create
	I1202 22:38:26.127897  578590 start.go:167] duration metric: took 6.924026533s to libmachine.API.Create "flannel-577910"
	I1202 22:38:26.127907  578590 start.go:293] postStartSetup for "flannel-577910" (driver="docker")
	I1202 22:38:26.127917  578590 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 22:38:26.127970  578590 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 22:38:26.128011  578590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-577910
	I1202 22:38:26.144457  578590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/flannel-577910/id_rsa Username:docker}
	I1202 22:38:26.245416  578590 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 22:38:26.248459  578590 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 22:38:26.248486  578590 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 22:38:26.248497  578590 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 22:38:26.248549  578590 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 22:38:26.248630  578590 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 22:38:26.248735  578590 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 22:38:26.256195  578590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:38:26.273313  578590 start.go:296] duration metric: took 145.39066ms for postStartSetup
	I1202 22:38:26.273721  578590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-577910
	I1202 22:38:26.289642  578590 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/config.json ...
	I1202 22:38:26.289922  578590 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:38:26.289971  578590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-577910
	I1202 22:38:26.306286  578590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/flannel-577910/id_rsa Username:docker}
	I1202 22:38:26.406598  578590 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 22:38:26.411058  578590 start.go:128] duration metric: took 7.210845827s to createHost
	I1202 22:38:26.411086  578590 start.go:83] releasing machines lock for "flannel-577910", held for 7.210979968s
	I1202 22:38:26.411159  578590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-577910
	I1202 22:38:26.427553  578590 ssh_runner.go:195] Run: cat /version.json
	I1202 22:38:26.427612  578590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-577910
	I1202 22:38:26.427562  578590 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 22:38:26.427701  578590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-577910
	I1202 22:38:26.448811  578590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/flannel-577910/id_rsa Username:docker}
	I1202 22:38:26.452269  578590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/flannel-577910/id_rsa Username:docker}
	I1202 22:38:26.553894  578590 ssh_runner.go:195] Run: systemctl --version
	I1202 22:38:26.651427  578590 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 22:38:26.656098  578590 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 22:38:26.656239  578590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 22:38:26.684871  578590 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1202 22:38:26.684894  578590 start.go:496] detecting cgroup driver to use...
	I1202 22:38:26.684926  578590 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 22:38:26.684981  578590 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 22:38:26.699948  578590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 22:38:26.712534  578590 docker.go:218] disabling cri-docker service (if available) ...
	I1202 22:38:26.712642  578590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 22:38:26.729950  578590 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 22:38:26.747670  578590 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 22:38:26.862976  578590 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 22:38:26.988813  578590 docker.go:234] disabling docker service ...
	I1202 22:38:26.988904  578590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 22:38:27.011330  578590 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 22:38:27.025314  578590 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 22:38:27.145301  578590 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 22:38:27.256135  578590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 22:38:27.272060  578590 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 22:38:27.287027  578590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 22:38:27.296938  578590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 22:38:27.307110  578590 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 22:38:27.307181  578590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 22:38:27.316573  578590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:38:27.325918  578590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 22:38:27.337629  578590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:38:27.347853  578590 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 22:38:27.356661  578590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 22:38:27.365622  578590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 22:38:27.374451  578590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1202 22:38:27.383278  578590 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 22:38:27.391120  578590 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 22:38:27.398616  578590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:38:27.505727  578590 ssh_runner.go:195] Run: sudo systemctl restart containerd
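	Editor's note: the sed runs at 22:38:27 patch /etc/containerd/config.toml in place (sandbox image, OOM score handling, cgroup driver, CNI conf dir, unprivileged ports) before this restart. Reassembled from those sed expressions, the touched settings should end up roughly as below; the exact nesting of the runc options table depends on the containerd 2.x config version, so treat this as a sketch:

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10.1"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = false    # minikube detected "cgroupfs" on this host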
	I1202 22:38:27.634553  578590 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 22:38:27.634666  578590 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 22:38:27.638720  578590 start.go:564] Will wait 60s for crictl version
	I1202 22:38:27.638827  578590 ssh_runner.go:195] Run: which crictl
	I1202 22:38:27.642634  578590 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 22:38:27.674908  578590 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 22:38:27.675027  578590 ssh_runner.go:195] Run: containerd --version
	I1202 22:38:27.693617  578590 ssh_runner.go:195] Run: containerd --version
	I1202 22:38:27.717698  578590 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.1.5 ...
	I1202 22:38:27.720684  578590 cli_runner.go:164] Run: docker network inspect flannel-577910 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:38:27.736125  578590 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 22:38:27.740005  578590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:38:27.749299  578590 kubeadm.go:884] updating cluster {Name:flannel-577910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-577910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 22:38:27.749425  578590 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1202 22:38:27.749497  578590 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:38:27.774783  578590 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 22:38:27.774808  578590 containerd.go:534] Images already preloaded, skipping extraction
	I1202 22:38:27.774870  578590 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:38:27.802080  578590 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 22:38:27.802108  578590 cache_images.go:86] Images are preloaded, skipping loading
	I1202 22:38:27.802117  578590 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 containerd true true} ...
	I1202 22:38:27.802898  578590 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-577910 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:flannel-577910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I1202 22:38:27.802985  578590 ssh_runner.go:195] Run: sudo crictl info
	I1202 22:38:27.827216  578590 cni.go:84] Creating CNI manager for "flannel"
	I1202 22:38:27.827256  578590 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 22:38:27.827280  578590 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-577910 NodeName:flannel-577910 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 22:38:27.827398  578590 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "flannel-577910"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
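	Editor's note: the kubeadm config above is one file containing four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small sketch, assuming gopkg.in/yaml.v3, that walks such a multi-document stream and prints each document's kind, which is a quick way to sanity-check what was written to /var/tmp/minikube/kubeadm.yaml:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Println(doc.APIVersion, doc.Kind)
		}
	}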
	
	I1202 22:38:27.827479  578590 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 22:38:27.835182  578590 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 22:38:27.835256  578590 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 22:38:27.842902  578590 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1202 22:38:27.855024  578590 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 22:38:27.867438  578590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1202 22:38:27.880178  578590 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 22:38:27.883621  578590 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:38:27.893161  578590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:38:28.006377  578590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:38:28.024108  578590 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910 for IP: 192.168.85.2
	I1202 22:38:28.024176  578590 certs.go:195] generating shared ca certs ...
	I1202 22:38:28.024214  578590 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:38:28.024411  578590 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 22:38:28.024484  578590 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 22:38:28.024507  578590 certs.go:257] generating profile certs ...
	I1202 22:38:28.024600  578590 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/client.key
	I1202 22:38:28.024638  578590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/client.crt with IP's: []
	I1202 22:38:28.138216  578590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/client.crt ...
	I1202 22:38:28.138248  578590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/client.crt: {Name:mk9bab7b6b37cfab7cdae1ed4d4934030f1ed59f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:38:28.138443  578590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/client.key ...
	I1202 22:38:28.138459  578590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/client.key: {Name:mk38afd82c268a97467b378cdf5ceb17dd5d617d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:38:28.138558  578590 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/apiserver.key.04d55c14
	I1202 22:38:28.138578  578590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/apiserver.crt.04d55c14 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1202 22:38:28.444602  578590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/apiserver.crt.04d55c14 ...
	I1202 22:38:28.444634  578590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/apiserver.crt.04d55c14: {Name:mkc7855b8997c0c651ec3499683249d7ddbd3cb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:38:28.444893  578590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/apiserver.key.04d55c14 ...
	I1202 22:38:28.444910  578590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/apiserver.key.04d55c14: {Name:mk09e485c67d821773ed4062f39f1687d3ea2e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:38:28.445002  578590 certs.go:382] copying /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/apiserver.crt.04d55c14 -> /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/apiserver.crt
	I1202 22:38:28.445080  578590 certs.go:386] copying /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/apiserver.key.04d55c14 -> /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/apiserver.key
	I1202 22:38:28.445140  578590 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/proxy-client.key
	I1202 22:38:28.445158  578590 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/proxy-client.crt with IP's: []
	I1202 22:38:28.544091  578590 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/proxy-client.crt ...
	I1202 22:38:28.544120  578590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/proxy-client.crt: {Name:mk83923253501449535f2829050d57eaab4de6a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:38:28.544301  578590 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/proxy-client.key ...
	I1202 22:38:28.544315  578590 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/proxy-client.key: {Name:mkfefe94358e53ce1cc8e79e60828ae915327eec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:38:28.544508  578590 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 22:38:28.544553  578590 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 22:38:28.544569  578590 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 22:38:28.544599  578590 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 22:38:28.544627  578590 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 22:38:28.544655  578590 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 22:38:28.544702  578590 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:38:28.545266  578590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 22:38:28.562256  578590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 22:38:28.580267  578590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 22:38:28.598530  578590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 22:38:28.615251  578590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 22:38:28.632008  578590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 22:38:28.648270  578590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 22:38:28.664647  578590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 22:38:28.680734  578590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 22:38:28.696821  578590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 22:38:28.712875  578590 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 22:38:28.728678  578590 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 22:38:28.740570  578590 ssh_runner.go:195] Run: openssl version
	I1202 22:38:28.746502  578590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 22:38:28.754218  578590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 22:38:28.757499  578590 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 22:38:28.757577  578590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 22:38:28.798759  578590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
	I1202 22:38:28.809372  578590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 22:38:28.825561  578590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 22:38:28.830316  578590 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 22:38:28.830429  578590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 22:38:28.879985  578590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 22:38:28.888770  578590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 22:38:28.897626  578590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:38:28.901421  578590 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:38:28.901531  578590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:38:28.942187  578590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 22:38:28.952926  578590 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 22:38:28.956378  578590 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 22:38:28.956430  578590 kubeadm.go:401] StartCluster: {Name:flannel-577910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:flannel-577910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:38:28.956502  578590 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 22:38:28.956556  578590 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:38:28.981580  578590 cri.go:89] found id: ""
	I1202 22:38:28.981717  578590 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 22:38:28.989217  578590 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 22:38:28.996646  578590 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 22:38:28.996709  578590 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 22:38:29.005476  578590 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 22:38:29.005499  578590 kubeadm.go:158] found existing configuration files:
	
	I1202 22:38:29.005577  578590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 22:38:29.013359  578590 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 22:38:29.013481  578590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 22:38:29.020959  578590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 22:38:29.028391  578590 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 22:38:29.028475  578590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 22:38:29.035472  578590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 22:38:29.042819  578590 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 22:38:29.042908  578590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 22:38:29.050281  578590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 22:38:29.057732  578590 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 22:38:29.057813  578590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 22:38:29.065114  578590 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 22:38:29.104611  578590 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1202 22:38:29.104911  578590 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 22:38:29.126391  578590 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 22:38:29.126466  578590 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 22:38:29.126505  578590 kubeadm.go:319] OS: Linux
	I1202 22:38:29.126555  578590 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 22:38:29.126607  578590 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 22:38:29.126659  578590 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 22:38:29.126710  578590 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 22:38:29.126766  578590 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 22:38:29.126820  578590 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 22:38:29.126869  578590 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 22:38:29.126921  578590 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 22:38:29.126971  578590 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 22:38:29.195553  578590 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 22:38:29.195667  578590 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 22:38:29.195762  578590 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 22:38:29.202154  578590 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569235694Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569248986Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569265026Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569277383Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569295622Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569310916Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569325226Z" level=info msg="runtime interface created"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569332503Z" level=info msg="created NRI interface"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569349447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569391611Z" level=info msg="Connect containerd service"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569647005Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.570228722Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.580119279Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.580197307Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.580232687Z" level=info msg="Start subscribing containerd event"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.580281530Z" level=info msg="Start recovering state"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.600449758Z" level=info msg="Start event monitor"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.600517374Z" level=info msg="Start cni network conf syncer for default"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.600528393Z" level=info msg="Start streaming server"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.600540085Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.600549102Z" level=info msg="runtime interface starting up..."
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.600555978Z" level=info msg="starting plugins..."
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.600709368Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 22:23:28 no-preload-904303 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.601830385Z" level=info msg="containerd successfully booted in 0.051957s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:38:34.278009    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:38:34.278770    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:38:34.280636    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:38:34.281009    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:38:34.282422    8138 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 22:38:34 up  4:20,  0 user,  load average: 2.02, 1.31, 1.20
	Linux no-preload-904303 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 22:38:31 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:38:31 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1202.
	Dec 02 22:38:31 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:38:31 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:38:31 no-preload-904303 kubelet[8004]: E1202 22:38:31.862454    8004 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:38:31 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:38:31 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:38:32 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1203.
	Dec 02 22:38:32 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:38:32 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:38:32 no-preload-904303 kubelet[8010]: E1202 22:38:32.650697    8010 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:38:32 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:38:32 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:38:33 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1204.
	Dec 02 22:38:33 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:38:33 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:38:33 no-preload-904303 kubelet[8038]: E1202 22:38:33.421398    8038 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:38:33 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:38:33 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:38:34 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1205.
	Dec 02 22:38:34 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:38:34 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:38:34 no-preload-904303 kubelet[8118]: E1202 22:38:34.142422    8118 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:38:34 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:38:34 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-904303 -n no-preload-904303
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-904303 -n no-preload-904303: exit status 2 (500.899292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-904303" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.33s)
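The kubelet journal above shows the same validation failure on every systemd restart (counters 1202 through 1205): the v1.35.0-beta.0 kubelet refuses to run on a cgroup v1 host, so the apiserver never comes back and the dependent checks time out. A minimal Go sketch of that host-side condition, assuming the conventional probe for the unified hierarchy (/sys/fs/cgroup/cgroup.controllers exists only on cgroup v2) rather than the kubelet's actual validation code:

	// cgroupcheck.go - hedged sketch, not kubelet code: on a unified (v2)
	// hierarchy the file /sys/fs/cgroup/cgroup.controllers exists; on a
	// legacy v1 hierarchy it does not, which is the condition the kubelet
	// error above rejects.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else if os.IsNotExist(err) {
			fmt.Println("cgroup v1 - kubelet v1.35+ refuses to start here")
		} else {
			fmt.Println("cannot determine cgroup mode:", err)
		}
	}

On the Ubuntu 20.04 / 5.15.0-1084-aws host shown in the kernel section, which still defaults to the legacy cgroup v1 hierarchy, this prints the v1 branch, matching the error the kubelet logs on each restart.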

TestStartStop/group/newest-cni/serial/Pause (9.5s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-250247 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-250247 -n newest-cni-250247
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-250247 -n newest-cni-250247: exit status 2 (304.28772ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-250247 -n newest-cni-250247
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-250247 -n newest-cni-250247: exit status 2 (321.777827ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-250247 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-250247 -n newest-cni-250247
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-250247 -n newest-cni-250247: exit status 2 (320.47695ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-250247 -n newest-cni-250247
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-250247 -n newest-cni-250247: exit status 2 (302.384293ms)

-- stdout --
	Stopped

                                                
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
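Each assertion above reduces to running minikube's status command with a Go template for a single component and comparing the trimmed stdout against the expected state. A hedged sketch of that check, reusing the binary path and profile name from the log lines above (the componentStatus helper is illustrative, not the test's actual code):

	// statuscheck.go - hedged sketch of the pause/unpause verification above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// componentStatus runs `minikube status --format={{.<field>}}` for a
	// profile. minikube exits non-zero for non-Running states (the
	// "exit status 2 (may be ok)" lines above), so the exit error is
	// deliberately ignored and the captured stdout is used as the state.
	func componentStatus(profile, field string) string {
		out, _ := exec.Command("out/minikube-linux-arm64", "status",
			fmt.Sprintf("--format={{.%s}}", field), "-p", profile, "-n", profile).Output()
		return strings.TrimSpace(string(out))
	}

	func main() {
		if got := componentStatus("newest-cni-250247", "APIServer"); got != "Paused" {
			fmt.Printf("post-pause apiserver status = %q; want = %q\n", got, "Paused")
		}
	}

Against this run the helper would return "Stopped" for both APIServer and Kubelet after pause and unpause alike, which is exactly the three mismatches recorded above.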
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-250247
helpers_test.go:243: (dbg) docker inspect newest-cni-250247:

-- stdout --
	[
	    {
	        "Id": "8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2",
	        "Created": "2025-12-02T22:17:45.695373395Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 546476,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T22:28:10.516417593Z",
	            "FinishedAt": "2025-12-02T22:28:08.91957983Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2/hosts",
	        "LogPath": "/var/lib/docker/containers/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2-json.log",
	        "Name": "/newest-cni-250247",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-250247:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-250247",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2",
	                "LowerDir": "/var/lib/docker/overlay2/ca046fece97404be279f28506c35572e6b66d2a0b57e70b756461317d0a04310-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ca046fece97404be279f28506c35572e6b66d2a0b57e70b756461317d0a04310/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ca046fece97404be279f28506c35572e6b66d2a0b57e70b756461317d0a04310/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ca046fece97404be279f28506c35572e6b66d2a0b57e70b756461317d0a04310/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-250247",
	                "Source": "/var/lib/docker/volumes/newest-cni-250247/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-250247",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-250247",
	                "name.minikube.sigs.k8s.io": "newest-cni-250247",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "19a1ca374f2ac15ceeb8732ad47e7e4e789db7b4dc20ead5353b14dfc8ce4376",
	            "SandboxKey": "/var/run/docker/netns/19a1ca374f2a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-250247": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:22:6b:2b:a3:2a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cfffc9981d9cab6ce5981c2e79bfb0dd15ae8455b64d0bfc795000bbbe273d91",
	                    "EndpointID": "6077ce03ce851ef49c2205e3affa2e3c9a93685b0b2e5a16a743470850763606",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-250247",
	                        "8d631b193c97"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
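The inspect output is large, but the post-mortem only consults a few of its fields: the container state and the published host ports. A hedged sketch of pulling just those out with encoding/json (struct fields mirror the JSON shown above; the profile name is the one from this run, and there are deliberately no nil-guards):

	// inspectfields.go - hedged sketch: decode the handful of `docker inspect`
	// fields the post-mortem above reads. encoding/json matches keys
	// case-insensitively, so HostIp/HostPort bind as written in the JSON.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type container struct {
		Name  string
		State struct {
			Status  string
			Running bool
		}
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "newest-cni-250247").Output()
		if err != nil {
			panic(err)
		}
		var cs []container
		if err := json.Unmarshal(out, &cs); err != nil {
			panic(err)
		}
		c := cs[0] // sketch only: assumes the container exists
		fmt.Printf("%s: state=%s apiserver=127.0.0.1:%s\n", c.Name,
			c.State.Status, c.NetworkSettings.Ports["8443/tcp"][0].HostPort)
	}

Against the JSON above this prints a "running" container with apiserver port 8443 published on 127.0.0.1:33426, consistent with the Host=Running but APIServer=Stopped split in the status checks that follow: the container is up while the control plane inside it is not.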
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-250247 -n newest-cni-250247
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-250247 -n newest-cni-250247: exit status 2 (337.917476ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-250247 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-250247 logs -n 25: (1.552416516s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p embed-certs-716386                                                                                                                                                                                                                                      │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p embed-certs-716386                                                                                                                                                                                                                                      │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p disable-driver-mounts-122586                                                                                                                                                                                                                            │ disable-driver-mounts-122586 │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ start   │ -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-444714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ stop    │ -p default-k8s-diff-port-444714 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-444714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ start   │ -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:17 UTC │
	│ image   │ default-k8s-diff-port-444714 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ pause   │ -p default-k8s-diff-port-444714 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ unpause │ -p default-k8s-diff-port-444714 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ delete  │ -p default-k8s-diff-port-444714                                                                                                                                                                                                                            │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ delete  │ -p default-k8s-diff-port-444714                                                                                                                                                                                                                            │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ start   │ -p newest-cni-250247 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-904303 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:21 UTC │                     │
	│ stop    │ -p no-preload-904303 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:23 UTC │ 02 Dec 25 22:23 UTC │
	│ addons  │ enable dashboard -p no-preload-904303 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:23 UTC │ 02 Dec 25 22:23 UTC │
	│ start   │ -p no-preload-904303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:23 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-250247 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:26 UTC │                     │
	│ stop    │ -p newest-cni-250247 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:28 UTC │ 02 Dec 25 22:28 UTC │
	│ addons  │ enable dashboard -p newest-cni-250247 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:28 UTC │ 02 Dec 25 22:28 UTC │
	│ start   │ -p newest-cni-250247 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:28 UTC │                     │
	│ image   │ newest-cni-250247 image list --format=json                                                                                                                                                                                                                 │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:34 UTC │ 02 Dec 25 22:34 UTC │
	│ pause   │ -p newest-cni-250247 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:34 UTC │ 02 Dec 25 22:34 UTC │
	│ unpause │ -p newest-cni-250247 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:34 UTC │ 02 Dec 25 22:34 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 22:28:09
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 22:28:09.982860  546345 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:28:09.982990  546345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:28:09.983001  546345 out.go:374] Setting ErrFile to fd 2...
	I1202 22:28:09.983006  546345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:28:09.983258  546345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 22:28:09.983629  546345 out.go:368] Setting JSON to false
	I1202 22:28:09.984474  546345 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15028,"bootTime":1764699462,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 22:28:09.984540  546345 start.go:143] virtualization:  
	I1202 22:28:09.987326  546345 out.go:179] * [newest-cni-250247] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:28:09.991071  546345 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:28:09.991190  546345 notify.go:221] Checking for updates...
	I1202 22:28:09.996957  546345 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:28:09.999951  546345 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:28:10.003165  546345 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 22:28:10.010024  546345 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:28:10.023215  546345 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:28:10.026934  546345 config.go:182] Loaded profile config "newest-cni-250247": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:28:10.027740  546345 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:28:10.065520  546345 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:28:10.065629  546345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:28:10.146197  546345 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:28:10.137008488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:28:10.146302  546345 docker.go:319] overlay module found
	I1202 22:28:10.149701  546345 out.go:179] * Using the docker driver based on existing profile
	I1202 22:28:10.152553  546345 start.go:309] selected driver: docker
	I1202 22:28:10.152579  546345 start.go:927] validating driver "docker" against &{Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:28:10.152714  546345 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:28:10.153449  546345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:28:10.206765  546345 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:28:10.197797072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:28:10.207092  546345 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 22:28:10.207126  546345 cni.go:84] Creating CNI manager for ""
	I1202 22:28:10.207191  546345 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:28:10.207234  546345 start.go:353] cluster config:
	{Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:28:10.210373  546345 out.go:179] * Starting "newest-cni-250247" primary control-plane node in "newest-cni-250247" cluster
	I1202 22:28:10.213164  546345 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 22:28:10.216139  546345 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 22:28:10.218905  546345 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:28:10.218974  546345 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 22:28:10.241012  546345 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 22:28:10.241034  546345 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 22:28:10.277912  546345 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1202 22:28:10.461684  546345 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1202 22:28:10.461922  546345 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json ...
	I1202 22:28:10.461950  546345 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462038  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 22:28:10.462049  546345 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.248µs
	I1202 22:28:10.462062  546345 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 22:28:10.462074  546345 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462104  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 22:28:10.462109  546345 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 36.282µs
	I1202 22:28:10.462115  546345 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 22:28:10.462125  546345 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462157  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 22:28:10.462162  546345 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 38.727µs
	I1202 22:28:10.462169  546345 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 22:28:10.462179  546345 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462196  546345 cache.go:243] Successfully downloaded all kic artifacts
	I1202 22:28:10.462206  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 22:28:10.462212  546345 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 34.534µs
	I1202 22:28:10.462218  546345 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 22:28:10.462227  546345 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462237  546345 start.go:360] acquireMachinesLock for newest-cni-250247: {Name:mk16586a4ea8dcb4ae29d3b0c6fe6a71644be6ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462253  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 22:28:10.462258  546345 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 32.098µs
	I1202 22:28:10.462265  546345 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 22:28:10.462274  546345 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462280  546345 start.go:364] duration metric: took 29.16µs to acquireMachinesLock for "newest-cni-250247"
	I1202 22:28:10.462305  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 22:28:10.462305  546345 start.go:96] Skipping create...Using existing machine configuration
	I1202 22:28:10.462319  546345 fix.go:54] fixHost starting: 
	I1202 22:28:10.462321  546345 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462350  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 22:28:10.462360  546345 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 34.731µs
	I1202 22:28:10.462365  546345 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 22:28:10.462378  546345 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462404  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 22:28:10.462408  546345 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 35.396µs
	I1202 22:28:10.462414  546345 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 22:28:10.462311  546345 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 38.21µs
	I1202 22:28:10.462504  546345 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 22:28:10.462515  546345 cache.go:87] Successfully saved all images to host disk.
	I1202 22:28:10.462628  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:10.483660  546345 fix.go:112] recreateIfNeeded on newest-cni-250247: state=Stopped err=<nil>
	W1202 22:28:10.483692  546345 fix.go:138] unexpected machine state, will restart: <nil>
	W1202 22:28:08.293846  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:10.294170  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:10.487123  546345 out.go:252] * Restarting existing docker container for "newest-cni-250247" ...
	I1202 22:28:10.487212  546345 cli_runner.go:164] Run: docker start newest-cni-250247
	I1202 22:28:10.752920  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:10.774107  546345 kic.go:430] container "newest-cni-250247" state is running.
	I1202 22:28:10.775430  546345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:28:10.803310  546345 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json ...
	I1202 22:28:10.803660  546345 machine.go:94] provisionDockerMachine start ...
	I1202 22:28:10.803741  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:10.835254  546345 main.go:143] libmachine: Using SSH client type: native
	I1202 22:28:10.835574  546345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1202 22:28:10.835582  546345 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 22:28:10.836341  546345 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47630->127.0.0.1:33423: read: connection reset by peer
	I1202 22:28:13.985241  546345 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-250247
	
	I1202 22:28:13.985267  546345 ubuntu.go:182] provisioning hostname "newest-cni-250247"
	I1202 22:28:13.985331  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:14.004448  546345 main.go:143] libmachine: Using SSH client type: native
	I1202 22:28:14.004830  546345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1202 22:28:14.004852  546345 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-250247 && echo "newest-cni-250247" | sudo tee /etc/hostname
	I1202 22:28:14.162890  546345 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-250247
	
	I1202 22:28:14.162970  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:14.180049  546345 main.go:143] libmachine: Using SSH client type: native
	I1202 22:28:14.180364  546345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1202 22:28:14.180385  546345 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-250247' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-250247/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-250247' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 22:28:14.325738  546345 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 22:28:14.325762  546345 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 22:28:14.325781  546345 ubuntu.go:190] setting up certificates
	I1202 22:28:14.325790  546345 provision.go:84] configureAuth start
	I1202 22:28:14.325861  546345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:28:14.342936  546345 provision.go:143] copyHostCerts
	I1202 22:28:14.343009  546345 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 22:28:14.343017  546345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 22:28:14.343091  546345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 22:28:14.343188  546345 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 22:28:14.343193  546345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 22:28:14.343217  546345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 22:28:14.343264  546345 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 22:28:14.343269  546345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 22:28:14.343292  546345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
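	Note: copyHostCerts uses a remove-then-copy pattern so a stale ca.pem/cert.pem/key.pem never survives a profile refresh. A minimal sketch of that pattern (illustrative paths, not minikube's actual helper):

```go
// copycert.go - sketch of the "found ..., removing ..." then "cp: ..."
// sequence logged above: delete any stale target, then copy the source
// with tight permissions.
package main

import (
	"io"
	"os"
)

func refreshCopy(src, dst string) error {
	os.Remove(dst) // ignore error if the target does not exist yet
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0600)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() { _ = refreshCopy("certs/ca.pem", "ca.pem") }
```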
	I1202 22:28:14.343342  546345 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.newest-cni-250247 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-250247]
	I1202 22:28:14.770203  546345 provision.go:177] copyRemoteCerts
	I1202 22:28:14.770270  546345 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 22:28:14.770310  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:14.787300  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:14.893004  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 22:28:14.909339  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 22:28:14.926255  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 22:28:14.942726  546345 provision.go:87] duration metric: took 616.921074ms to configureAuth
	I1202 22:28:14.942753  546345 ubuntu.go:206] setting minikube options for container-runtime
	I1202 22:28:14.942983  546345 config.go:182] Loaded profile config "newest-cni-250247": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:28:14.942996  546345 machine.go:97] duration metric: took 4.139308859s to provisionDockerMachine
	I1202 22:28:14.943006  546345 start.go:293] postStartSetup for "newest-cni-250247" (driver="docker")
	I1202 22:28:14.943017  546345 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 22:28:14.943072  546345 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 22:28:14.943129  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:14.960329  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:15.069600  546345 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 22:28:15.072888  546345 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 22:28:15.072916  546345 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 22:28:15.072928  546345 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 22:28:15.073008  546345 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 22:28:15.073125  546345 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 22:28:15.073236  546345 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 22:28:15.080571  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:28:15.098287  546345 start.go:296] duration metric: took 155.265122ms for postStartSetup
	I1202 22:28:15.098433  546345 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:28:15.098514  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:15.116407  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:15.218632  546345 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 22:28:15.223330  546345 fix.go:56] duration metric: took 4.761004698s for fixHost
	I1202 22:28:15.223357  546345 start.go:83] releasing machines lock for "newest-cni-250247", held for 4.761068204s
	I1202 22:28:15.223423  546345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:28:15.240165  546345 ssh_runner.go:195] Run: cat /version.json
	I1202 22:28:15.240226  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:15.240474  546345 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 22:28:15.240537  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:15.266111  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:15.266672  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:15.465947  546345 ssh_runner.go:195] Run: systemctl --version
	I1202 22:28:15.472302  546345 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 22:28:15.476459  546345 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 22:28:15.476528  546345 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 22:28:15.484047  546345 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 22:28:15.484071  546345 start.go:496] detecting cgroup driver to use...
	I1202 22:28:15.484132  546345 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 22:28:15.484196  546345 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 22:28:15.501336  546345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 22:28:15.514809  546345 docker.go:218] disabling cri-docker service (if available) ...
	I1202 22:28:15.514870  546345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 22:28:15.529978  546345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 22:28:15.542949  546345 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 22:28:15.646754  546345 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 22:28:15.763470  546345 docker.go:234] disabling docker service ...
	I1202 22:28:15.763534  546345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 22:28:15.778139  546345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 22:28:15.790687  546345 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 22:28:15.899099  546345 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 22:28:16.013695  546345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 22:28:16.027166  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 22:28:16.044232  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 22:28:16.054377  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 22:28:16.064256  546345 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 22:28:16.064370  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 22:28:16.074182  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:28:16.083929  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 22:28:16.093428  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:28:16.103465  546345 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 22:28:16.111974  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 22:28:16.120391  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 22:28:16.129324  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1202 22:28:16.138640  546345 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 22:28:16.146079  546345 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 22:28:16.153383  546345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:28:16.258631  546345 ssh_runner.go:195] Run: sudo systemctl restart containerd
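	Note: the sed commands above rewrite /etc/containerd/config.toml so the runtime matches the detected "cgroupfs" host driver before containerd is restarted. The same multiline substitution expressed in Go, as a sketch over a toy input fragment (not minikube's actual code):

```go
// cgroupcfg.go - sketch of the SystemdCgroup rewrite applied by sed above.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// (?m) makes ^/$ match per line, mirroring sed's line-oriented edit.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAllString(conf, `${1}SystemdCgroup = false`)
	fmt.Println(out) // host driver "cgroupfs" -> SystemdCgroup = false
}
```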
	I1202 22:28:16.349094  546345 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 22:28:16.349206  546345 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
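	Note: the 60s socket wait polls until /run/containerd/containerd.sock exists after the restart. A simplified local sketch (the real check runs the stat over SSH on the node):

```go
// waitsock.go - sketch of the "Will wait 60s for socket path" step.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForFile(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	fmt.Println(waitForFile("/run/containerd/containerd.sock", 60*time.Second))
}
```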
	I1202 22:28:16.353088  546345 start.go:564] Will wait 60s for crictl version
	I1202 22:28:16.353236  546345 ssh_runner.go:195] Run: which crictl
	I1202 22:28:16.356669  546345 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 22:28:16.382942  546345 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 22:28:16.383050  546345 ssh_runner.go:195] Run: containerd --version
	I1202 22:28:16.402826  546345 ssh_runner.go:195] Run: containerd --version
	I1202 22:28:16.429935  546345 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 22:28:16.432731  546345 cli_runner.go:164] Run: docker network inspect newest-cni-250247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:28:16.448989  546345 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 22:28:16.452808  546345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
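	Note: the one-liner above rebuilds /etc/hosts by filtering out any existing host.minikube.internal line, appending the fresh mapping, and copying the temp file over the original (cp rather than a rename, presumably because /etc/hosts is bind-mounted inside containers, where a rename would fail). A sketch of the same idempotent update in Go (simplified: no temp file or sudo):

```go
// hostsentry.go - sketch of the { grep -v; echo; } > tmp; cp pattern above.
package main

import (
	"os"
	"strings"
)

func setHostAlias(path, ip, alias string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, l := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(l, "\t"+alias) { // grep -v $'\t<alias>$'
			kept = append(kept, l)
		}
	}
	kept = append(kept, ip+"\t"+alias) // echo "<ip>\t<alias>"
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() { _ = setHostAlias("hosts.test", "192.168.85.1", "host.minikube.internal") }
```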
	I1202 22:28:16.464968  546345 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1202 22:28:12.794132  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:15.294790  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:16.467854  546345 kubeadm.go:884] updating cluster {Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 22:28:16.468035  546345 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:28:16.468117  546345 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:28:16.491782  546345 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 22:28:16.491805  546345 cache_images.go:86] Images are preloaded, skipping loading
	I1202 22:28:16.491813  546345 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1202 22:28:16.491914  546345 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-250247 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 22:28:16.491984  546345 ssh_runner.go:195] Run: sudo crictl info
	I1202 22:28:16.515416  546345 cni.go:84] Creating CNI manager for ""
	I1202 22:28:16.515440  546345 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:28:16.515457  546345 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1202 22:28:16.515491  546345 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-250247 NodeName:newest-cni-250247 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 22:28:16.515606  546345 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-250247"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 22:28:16.515677  546345 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:28:16.522844  546345 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 22:28:16.522912  546345 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 22:28:16.529836  546345 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 22:28:16.541819  546345 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 22:28:16.553461  546345 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
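	Note: the kubeadm config dumped above is generated programmatically and shipped to the node as kubeadm.yaml.new, then diffed against the existing file further down. A toy sketch of rendering such a fragment with text/template (field names here are illustrative, not minikube's actual types):

```go
// kubeadmcfg.go - sketch of template-rendering a kubeadm config fragment.
package main

import (
	"os"
	"text/template"
)

const frag = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	// Values taken from this run: node IP 192.168.85.2, API port 8443.
	_ = t.Execute(os.Stdout, struct {
		NodeIP string
		Port   int
	}{"192.168.85.2", 8443})
}
```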
	I1202 22:28:16.565531  546345 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 22:28:16.569041  546345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:28:16.578309  546345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:28:16.682927  546345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:28:16.699616  546345 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247 for IP: 192.168.85.2
	I1202 22:28:16.699641  546345 certs.go:195] generating shared ca certs ...
	I1202 22:28:16.699658  546345 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:28:16.699787  546345 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 22:28:16.699846  546345 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 22:28:16.699857  546345 certs.go:257] generating profile certs ...
	I1202 22:28:16.699953  546345 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.key
	I1202 22:28:16.700029  546345 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde
	I1202 22:28:16.700095  546345 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key
	I1202 22:28:16.700208  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 22:28:16.700249  546345 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 22:28:16.700262  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 22:28:16.700295  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 22:28:16.700323  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 22:28:16.700356  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 22:28:16.700412  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:28:16.701077  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 22:28:16.721941  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 22:28:16.740644  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 22:28:16.759568  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 22:28:16.776264  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 22:28:16.794239  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 22:28:16.814293  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 22:28:16.833481  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 22:28:16.852733  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 22:28:16.870078  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 22:28:16.886149  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 22:28:16.902507  546345 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 22:28:16.913942  546345 ssh_runner.go:195] Run: openssl version
	I1202 22:28:16.919938  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 22:28:16.927825  546345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:28:16.931606  546345 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:28:16.931675  546345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:28:16.974237  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 22:28:16.981828  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 22:28:16.989638  546345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 22:28:16.992999  546345 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 22:28:16.993061  546345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 22:28:17.033731  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
	I1202 22:28:17.041307  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 22:28:17.049114  546345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 22:28:17.052710  546345 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 22:28:17.052816  546345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 22:28:17.093368  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
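	Note: each `openssl x509 -hash` / `ln -fs .../<hash>.0` pair above installs a CA certificate under OpenSSL's subject-hash lookup name, which is how the TLS stack finds it in /etc/ssl/certs. A sketch of the same two steps driven from Go (requires the openssl binary; paths taken from this run):

```go
// certhash.go - sketch of the hash-and-symlink step logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkCert(pem, dir string) error {
	// `openssl x509 -hash -noout -in <pem>` prints the subject hash.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := dir + "/" + hash + ".0"
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pem, link)
}

func main() {
	fmt.Println(linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}
```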
	I1202 22:28:17.101039  546345 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 22:28:17.104530  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 22:28:17.145234  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 22:28:17.186252  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 22:28:17.227251  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 22:28:17.270184  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 22:28:17.315680  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
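	Note: `-checkend 86400` in the runs above asks whether each cluster certificate expires within the next 86400 seconds (24h), i.e. whether it needs regeneration before the restart. A pure-Go equivalent with crypto/x509 (a sketch, not minikube's code; path from this run):

```go
// checkend.go - sketch of what `openssl x509 -checkend 86400` verifies.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within duration d of now.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
```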
	I1202 22:28:17.356357  546345 kubeadm.go:401] StartCluster: {Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:28:17.356449  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 22:28:17.356551  546345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:28:17.384974  546345 cri.go:89] found id: ""
	I1202 22:28:17.385084  546345 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 22:28:17.392914  546345 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 22:28:17.392983  546345 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 22:28:17.393055  546345 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 22:28:17.400365  546345 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 22:28:17.400969  546345 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-250247" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:28:17.401222  546345 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-261381/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-250247" cluster setting kubeconfig missing "newest-cni-250247" context setting]
	I1202 22:28:17.401752  546345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:28:17.403065  546345 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 22:28:17.410696  546345 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 22:28:17.410762  546345 kubeadm.go:602] duration metric: took 17.7594ms to restartPrimaryControlPlane
	I1202 22:28:17.410793  546345 kubeadm.go:403] duration metric: took 54.438388ms to StartCluster
	I1202 22:28:17.410829  546345 settings.go:142] acquiring lock: {Name:mk484fa83ac7553aeb154b510943680cadb4046e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:28:17.410902  546345 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:28:17.412749  546345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:28:17.413013  546345 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 22:28:17.416416  546345 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 22:28:17.416535  546345 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-250247"
	I1202 22:28:17.416566  546345 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-250247"
	I1202 22:28:17.416596  546345 config.go:182] Loaded profile config "newest-cni-250247": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:28:17.416607  546345 host.go:66] Checking if "newest-cni-250247" exists ...
	I1202 22:28:17.416873  546345 addons.go:70] Setting dashboard=true in profile "newest-cni-250247"
	I1202 22:28:17.416893  546345 addons.go:239] Setting addon dashboard=true in "newest-cni-250247"
	W1202 22:28:17.416900  546345 addons.go:248] addon dashboard should already be in state true
	I1202 22:28:17.416923  546345 host.go:66] Checking if "newest-cni-250247" exists ...
	I1202 22:28:17.417319  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:17.417762  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:17.418220  546345 addons.go:70] Setting default-storageclass=true in profile "newest-cni-250247"
	I1202 22:28:17.418240  546345 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-250247"
	I1202 22:28:17.418515  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:17.421722  546345 out.go:179] * Verifying Kubernetes components...
	I1202 22:28:17.424546  546345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:28:17.473567  546345 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 22:28:17.473567  546345 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:28:17.475110  546345 addons.go:239] Setting addon default-storageclass=true in "newest-cni-250247"
	I1202 22:28:17.475145  546345 host.go:66] Checking if "newest-cni-250247" exists ...
	I1202 22:28:17.475548  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:17.477614  546345 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:17.477633  546345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 22:28:17.477833  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:17.481801  546345 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 22:28:17.489727  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 22:28:17.489757  546345 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 22:28:17.489831  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:17.519689  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:17.519729  546345 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 22:28:17.519742  546345 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 22:28:17.519796  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:17.551180  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:17.565506  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:17.644850  546345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:28:17.726531  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:17.763912  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 22:28:17.792014  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 22:28:17.792042  546345 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 22:28:17.824225  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 22:28:17.824250  546345 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 22:28:17.838468  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 22:28:17.838492  546345 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 22:28:17.851940  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 22:28:17.851965  546345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 22:28:17.864211  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 22:28:17.864276  546345 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 22:28:17.876057  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 22:28:17.876079  546345 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 22:28:17.887797  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 22:28:17.887867  546345 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 22:28:17.899526  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 22:28:17.899547  546345 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 22:28:17.911602  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:17.911626  546345 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 22:28:17.923996  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:18.303299  546345 api_server.go:52] waiting for apiserver process to appear ...
	I1202 22:28:18.303418  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:18.303565  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.303612  546345 retry.go:31] will retry after 133.710161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:18.303717  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.303748  546345 retry.go:31] will retry after 138.021594ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:18.303974  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.304008  546345 retry.go:31] will retry after 237.208538ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
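	Note: the pattern above - each apply fails with "connection refused" while the apiserver is still restarting, the error is logged, and the command is retried after a short, growing, jittered delay (133ms, 138ms, 237ms, ...) - is a standard backoff loop. A self-contained sketch of that loop (minikube's actual helper is its retry package, retry.go above, which differs in detail):

```go
// applyretry.go - sketch of retry-with-backoff while the apiserver comes up.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs f up to attempts times, sleeping a jittered, linearly
// growing delay between failures, and returns the last error.
func retry(attempts int, base time.Duration, f func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		d := base*time.Duration(i+1) + time.Duration(rand.Intn(100))*time.Millisecond
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	i := 0
	_ = retry(5, 150*time.Millisecond, func() error {
		i++
		if i < 3 { // simulate the apiserver refusing connections at first
			return fmt.Errorf("dial tcp [::1]:8443: connect: connection refused")
		}
		return nil
	})
}
```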
	I1202 22:28:18.438371  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:18.442705  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:18.512074  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.512108  546345 retry.go:31] will retry after 489.996663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:18.521184  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.521218  546345 retry.go:31] will retry after 506.041741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.542348  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:18.605737  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.605775  546345 retry.go:31] will retry after 347.613617ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.804191  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:18.953629  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:19.003207  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:19.021755  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.021793  546345 retry.go:31] will retry after 285.211473ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.028084  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:19.152805  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.152839  546345 retry.go:31] will retry after 301.33995ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:19.169007  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.169038  546345 retry.go:31] will retry after 787.522923ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.304323  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:19.307756  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:19.364720  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.364752  546345 retry.go:31] will retry after 744.498002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.454779  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:19.514605  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.514684  546345 retry.go:31] will retry after 936.080491ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.803793  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:19.957439  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:17.793953  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:20.293990  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:20.022370  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.022406  546345 retry.go:31] will retry after 798.963887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.109555  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:20.176777  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.176873  546345 retry.go:31] will retry after 799.677911ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.303906  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:20.451319  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:20.513056  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.513087  546345 retry.go:31] will retry after 774.001274ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.804493  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:20.822263  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:20.884574  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.884663  546345 retry.go:31] will retry after 1.794003449s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.976884  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:21.043200  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:21.043233  546345 retry.go:31] will retry after 2.577364105s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:21.287368  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:21.303812  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:21.396263  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:21.396297  546345 retry.go:31] will retry after 1.406655136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:21.803778  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:22.303682  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:22.678940  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:22.734117  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:22.734151  546345 retry.go:31] will retry after 2.241021271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:22.803453  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:22.803660  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:22.908987  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:22.909065  546345 retry.go:31] will retry after 2.592452064s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:23.304587  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:23.621298  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:23.681960  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:23.681992  546345 retry.go:31] will retry after 4.002263162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:23.804126  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:24.303637  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:24.803614  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:24.976147  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:22.793981  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:25.293952  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:25.036436  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:25.036470  546345 retry.go:31] will retry after 3.520246776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:25.303592  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:25.502542  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:25.567000  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:25.567033  546345 retry.go:31] will retry after 5.323254411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:25.804224  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:26.304369  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:26.803599  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:27.303952  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
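Interleaved with the apply retries, minikube polls roughly every 500ms for a running kube-apiserver process via `sudo pgrep -xnf kube-apiserver.*minikube.*`. A minimal sketch of that health poll follows; the 500ms interval is read off the timestamps above, while the overall timeout is an assumption for illustration.

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep for a kube-apiserver process, mirroring the
// repeated "Run: sudo pgrep -xnf kube-apiserver.*minikube.*" lines above.
func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("kube-apiserver never came up: %w", ctx.Err())
		case <-ticker.C:
			// pgrep exits 0 only when a matching process exists.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	fmt.Println(waitForAPIServer(ctx))
}
```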
	I1202 22:28:27.684919  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:27.748186  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:27.748220  546345 retry.go:31] will retry after 5.733866836s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:27.804400  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:28.304209  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:28.556915  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:28.614437  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:28.614469  546345 retry.go:31] will retry after 5.59146354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:28.803555  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:29.303563  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:29.803564  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:27.794055  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:29.794270  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:32.293942  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
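The `node_ready.go:55` warnings (process 539599) belong to the parallel no-preload test: it repeatedly fetches the node object and checks its Ready condition, and every fetch dies at the same closed port 8443 on 192.168.76.2. A hedged client-go sketch of that readiness check is below; building the client from a kubeconfig path and the single-shot structure are assumptions for illustration, not minikube's code.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches one node and reports whether its Ready condition is True,
// the check behind the node_ready.go warnings above.
func nodeReady(ctx context.Context, kubeconfig, name string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		// e.g. "dial tcp 192.168.76.2:8443: connect: connection refused"
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := nodeReady(context.Background(),
		"/var/lib/minikube/kubeconfig", "no-preload-904303")
	fmt.Println(ready, err)
}
```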
	I1202 22:28:30.304278  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:30.803599  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:30.891315  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:30.954133  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:30.954165  546345 retry.go:31] will retry after 6.008326018s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:31.303642  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:31.803766  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:32.304456  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:32.804272  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:33.304447  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:33.482755  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:33.544609  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:33.544640  546345 retry.go:31] will retry after 5.236447557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:33.804125  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:34.206989  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:34.267528  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:34.267562  546345 retry.go:31] will retry after 5.128568146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:34.303642  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:34.804011  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:34.793866  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:36.794018  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:35.304181  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:35.803881  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:36.304159  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:36.804539  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:36.963637  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:37.037814  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:37.037848  546345 retry.go:31] will retry after 8.195284378s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:37.304208  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:37.804338  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:38.303552  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:38.781347  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:38.803757  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:38.846454  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:38.846487  546345 retry.go:31] will retry after 10.92120738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:39.304100  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:39.396834  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:39.454859  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:39.454893  546345 retry.go:31] will retry after 6.04045657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:39.804469  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:39.293843  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:41.293938  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:40.303596  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:40.804541  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:41.303922  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:41.803906  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:42.304508  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:42.804313  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:43.304463  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:43.803539  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:44.304169  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:44.803620  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:43.294289  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:45.294597  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:47.294896  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:45.235996  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:45.303907  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:45.410878  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:45.410909  546345 retry.go:31] will retry after 9.368309576s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:45.496112  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:45.553672  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:45.553705  546345 retry.go:31] will retry after 7.750202952s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:45.804015  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:46.303559  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:46.804327  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:47.303603  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:47.804053  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:48.303550  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:48.803634  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:49.303688  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:49.768489  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:49.804064  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:49.895914  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:49.895948  546345 retry.go:31] will retry after 11.070404971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:49.794091  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:51.794902  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:50.304462  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:50.803593  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:51.304256  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:51.804118  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:52.304451  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:52.804096  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:53.303837  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:53.304041  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:53.361880  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:53.361915  546345 retry.go:31] will retry after 21.51867829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:53.804496  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:54.303718  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:54.779367  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:54.803837  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:54.852160  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:54.852195  546345 retry.go:31] will retry after 25.514460464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:54.293970  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:56.294081  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:55.303807  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:55.804288  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:56.304329  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:56.803616  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:57.303836  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:57.804152  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:58.304034  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:58.803992  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:59.304109  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:59.804084  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:58.793961  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:01.293995  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:00.305594  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:00.803492  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:00.967275  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:29:01.023919  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:01.023952  546345 retry.go:31] will retry after 14.799716379s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:01.304168  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:01.804346  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:02.304261  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:02.803541  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:03.304078  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:03.804260  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:04.304145  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:04.803593  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:29:03.793972  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:05.794096  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:05.304303  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:05.804290  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:06.304157  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:06.804297  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:07.304486  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:07.803594  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:08.303514  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:08.803514  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:09.304264  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:09.804046  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:29:08.294013  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:10.794007  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:10.304151  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:10.804338  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:11.304108  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:11.803600  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:12.304520  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:12.804189  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:13.304155  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:13.803517  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:14.304548  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:14.803761  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:14.881559  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:29:14.937730  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:14.937760  546345 retry.go:31] will retry after 41.941175985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:29:13.294025  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:15.301168  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:15.316948  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:15.804548  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:15.823888  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:29:15.884943  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:15.884976  546345 retry.go:31] will retry after 35.611848449s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:16.303570  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:16.803687  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:17.304005  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:17.804234  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:17.804335  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:17.829227  546345 cri.go:89] found id: ""
	I1202 22:29:17.829257  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.829265  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:17.829272  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:17.829332  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:17.853121  546345 cri.go:89] found id: ""
	I1202 22:29:17.853146  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.853154  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:17.853161  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:17.853219  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:17.877170  546345 cri.go:89] found id: ""
	I1202 22:29:17.877195  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.877204  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:17.877210  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:17.877267  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:17.904673  546345 cri.go:89] found id: ""
	I1202 22:29:17.904698  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.904707  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:17.904717  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:17.904784  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:17.928244  546345 cri.go:89] found id: ""
	I1202 22:29:17.928284  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.928294  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:17.928301  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:17.928363  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:17.951262  546345 cri.go:89] found id: ""
	I1202 22:29:17.951283  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.951292  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:17.951299  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:17.951363  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:17.979941  546345 cri.go:89] found id: ""
	I1202 22:29:17.979971  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.979980  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:17.979987  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:17.980046  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:18.014330  546345 cri.go:89] found id: ""
	I1202 22:29:18.014352  546345 logs.go:282] 0 containers: []
	W1202 22:29:18.014361  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:18.014370  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:18.014382  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:18.070623  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:18.070659  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:18.086453  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:18.086483  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:18.147206  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:18.139601    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.140184    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.141932    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.142471    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.144157    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:18.139601    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.140184    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.141932    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.142471    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.144157    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:18.147229  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:18.147242  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:18.171557  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:18.171592  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 22:29:17.794066  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:20.293905  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:22.293952  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:20.367703  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:29:20.422565  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:20.422597  546345 retry.go:31] will retry after 40.968515426s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:20.701050  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:20.711132  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:20.711213  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:20.734019  546345 cri.go:89] found id: ""
	I1202 22:29:20.734042  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.734050  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:20.734057  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:20.734114  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:20.756521  546345 cri.go:89] found id: ""
	I1202 22:29:20.756546  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.756554  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:20.756561  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:20.756620  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:20.787826  546345 cri.go:89] found id: ""
	I1202 22:29:20.787852  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.787869  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:20.787876  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:20.787939  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:20.811402  546345 cri.go:89] found id: ""
	I1202 22:29:20.811427  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.811435  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:20.811441  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:20.811500  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:20.835289  546345 cri.go:89] found id: ""
	I1202 22:29:20.835314  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.835322  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:20.835329  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:20.835404  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:20.858522  546345 cri.go:89] found id: ""
	I1202 22:29:20.858548  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.858556  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:20.858563  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:20.858622  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:20.883759  546345 cri.go:89] found id: ""
	I1202 22:29:20.883783  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.883791  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:20.883798  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:20.883857  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:20.907968  546345 cri.go:89] found id: ""
	I1202 22:29:20.907992  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.908001  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:20.908010  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:20.908020  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:20.962992  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:20.963028  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:20.978472  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:20.978499  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:21.039749  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:21.032843    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.033345    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.034809    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.035236    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.036659    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:21.032843    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.033345    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.034809    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.035236    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.036659    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:21.039771  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:21.039784  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:21.064157  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:21.064194  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:23.595745  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:23.606920  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:23.606996  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:23.633420  546345 cri.go:89] found id: ""
	I1202 22:29:23.633450  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.633459  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:23.633473  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:23.633532  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:23.659559  546345 cri.go:89] found id: ""
	I1202 22:29:23.659581  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.659590  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:23.659596  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:23.659663  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:23.684986  546345 cri.go:89] found id: ""
	I1202 22:29:23.685010  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.685031  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:23.685039  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:23.685099  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:23.709487  546345 cri.go:89] found id: ""
	I1202 22:29:23.709560  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.709583  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:23.709604  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:23.709734  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:23.734133  546345 cri.go:89] found id: ""
	I1202 22:29:23.734159  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.734167  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:23.734173  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:23.734233  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:23.758126  546345 cri.go:89] found id: ""
	I1202 22:29:23.758190  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.758213  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:23.758234  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:23.758327  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:23.782448  546345 cri.go:89] found id: ""
	I1202 22:29:23.782471  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.782480  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:23.782505  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:23.782579  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:23.806736  546345 cri.go:89] found id: ""
	I1202 22:29:23.806761  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.806770  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:23.806780  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:23.806790  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:23.865578  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:23.865619  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:23.881434  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:23.881470  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:23.944584  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:23.936843    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.937517    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.939081    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.939622    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.941360    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:23.936843    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.937517    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.939081    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.939622    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.941360    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:23.944606  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:23.944619  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:23.970159  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:23.970207  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 22:29:24.793885  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:26.794021  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:26.498138  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:26.508783  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:26.508852  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:26.537015  546345 cri.go:89] found id: ""
	I1202 22:29:26.537037  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.537046  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:26.537053  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:26.537110  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:26.574312  546345 cri.go:89] found id: ""
	I1202 22:29:26.574339  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.574347  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:26.574354  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:26.574411  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:26.629052  546345 cri.go:89] found id: ""
	I1202 22:29:26.629079  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.629087  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:26.629094  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:26.629150  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:26.658217  546345 cri.go:89] found id: ""
	I1202 22:29:26.658251  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.658259  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:26.658266  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:26.658337  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:26.681717  546345 cri.go:89] found id: ""
	I1202 22:29:26.681751  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.681760  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:26.681778  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:26.681850  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:26.704611  546345 cri.go:89] found id: ""
	I1202 22:29:26.704646  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.704655  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:26.704661  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:26.704733  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:26.728028  546345 cri.go:89] found id: ""
	I1202 22:29:26.728091  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.728115  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:26.728137  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:26.728223  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:26.755557  546345 cri.go:89] found id: ""
	I1202 22:29:26.755582  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.755590  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:26.755600  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:26.755611  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:26.786053  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:26.786080  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:26.841068  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:26.841100  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:26.856799  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:26.856829  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:26.924274  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:26.913901    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.914406    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.918374    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.919140    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.920188    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:26.924338  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:26.924358  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:29.449918  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:29.460186  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:29.460259  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:29.483893  546345 cri.go:89] found id: ""
	I1202 22:29:29.483915  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.483924  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:29.483930  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:29.483990  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:29.507973  546345 cri.go:89] found id: ""
	I1202 22:29:29.507999  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.508007  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:29.508013  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:29.508073  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:29.532020  546345 cri.go:89] found id: ""
	I1202 22:29:29.532045  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.532054  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:29.532061  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:29.532119  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:29.583563  546345 cri.go:89] found id: ""
	I1202 22:29:29.583590  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.583599  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:29.583606  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:29.583664  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:29.626796  546345 cri.go:89] found id: ""
	I1202 22:29:29.626821  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.626830  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:29.626837  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:29.626910  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:29.650151  546345 cri.go:89] found id: ""
	I1202 22:29:29.650179  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.650186  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:29.650193  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:29.650254  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:29.677989  546345 cri.go:89] found id: ""
	I1202 22:29:29.678015  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.678023  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:29.678031  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:29.678090  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:29.707431  546345 cri.go:89] found id: ""
	I1202 22:29:29.707457  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.707465  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:29.707475  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:29.707486  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:29.773447  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:29.766251    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.766804    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.768321    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.768748    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.770331    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:29.773470  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:29.773484  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:29.798530  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:29.798604  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:29.825490  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:29.825517  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:29.884423  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:29.884461  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1202 22:29:28.794762  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:30.793709  539599 node_ready.go:38] duration metric: took 6m0.000289785s for node "no-preload-904303" to be "Ready" ...
	I1202 22:29:30.796935  539599 out.go:203] 
	W1202 22:29:30.799794  539599 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 22:29:30.799816  539599 out.go:285] * 
	W1202 22:29:30.802151  539599 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 22:29:30.804961  539599 out.go:203] 
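	[Editor's note: the diagnostic loop recorded above can be reproduced by hand on the affected node. This is a sketch assembled only from the commands already shown in this log; the profile name (no-preload-904303) and kubectl binary path are taken from the log and may differ per run.]
	# Enter the node first, e.g.: minikube ssh -p no-preload-904303
	# 1. Check whether an apiserver process is running at all
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# 2. Look for control-plane containers via the CRI
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$name"
	done
	# 3. Tail the relevant unit and kernel logs
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# 4. Ask the in-cluster kubectl to describe nodes
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig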
	I1202 22:29:32.401788  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:32.413697  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:32.413768  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:32.447463  546345 cri.go:89] found id: ""
	I1202 22:29:32.447486  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.447494  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:32.447501  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:32.447560  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:32.480451  546345 cri.go:89] found id: ""
	I1202 22:29:32.480473  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.480481  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:32.480487  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:32.480543  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:32.518559  546345 cri.go:89] found id: ""
	I1202 22:29:32.518581  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.518590  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:32.518596  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:32.518652  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:32.570716  546345 cri.go:89] found id: ""
	I1202 22:29:32.570737  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.570746  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:32.570752  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:32.570809  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:32.612686  546345 cri.go:89] found id: ""
	I1202 22:29:32.612722  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.612731  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:32.612738  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:32.612797  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:32.651570  546345 cri.go:89] found id: ""
	I1202 22:29:32.651592  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.651600  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:32.651607  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:32.651671  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:32.679451  546345 cri.go:89] found id: ""
	I1202 22:29:32.679475  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.679484  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:32.679490  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:32.679552  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:32.705124  546345 cri.go:89] found id: ""
	I1202 22:29:32.705149  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.705170  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:32.705180  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:32.705193  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:32.772557  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:32.763930    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.764653    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.766262    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.766778    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.768469    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:32.772578  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:32.772590  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:32.798210  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:32.798246  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:32.826270  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:32.826298  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:32.885460  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:32.885496  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:35.401743  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:35.412979  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:35.413051  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:35.438650  546345 cri.go:89] found id: ""
	I1202 22:29:35.438684  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.438703  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:35.438710  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:35.438787  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:35.467326  546345 cri.go:89] found id: ""
	I1202 22:29:35.467350  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.467358  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:35.467365  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:35.467444  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:35.492513  546345 cri.go:89] found id: ""
	I1202 22:29:35.492546  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.492554  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:35.492561  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:35.492659  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:35.517758  546345 cri.go:89] found id: ""
	I1202 22:29:35.517785  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.517794  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:35.517801  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:35.517861  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:35.564303  546345 cri.go:89] found id: ""
	I1202 22:29:35.564329  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.564338  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:35.564345  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:35.564431  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:35.610173  546345 cri.go:89] found id: ""
	I1202 22:29:35.610253  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.610289  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:35.610311  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:35.610412  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:35.647481  546345 cri.go:89] found id: ""
	I1202 22:29:35.647545  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.647560  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:35.647567  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:35.647628  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:35.671535  546345 cri.go:89] found id: ""
	I1202 22:29:35.671561  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.671569  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:35.671579  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:35.671591  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:35.736069  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:35.728833    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.729443    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.730886    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.731384    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.732973    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:35.736092  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:35.736106  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:35.760759  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:35.760794  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:35.786652  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:35.786678  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:35.842999  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:35.843035  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:38.358963  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:38.369060  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:38.369123  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:38.400304  546345 cri.go:89] found id: ""
	I1202 22:29:38.400330  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.400339  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:38.400351  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:38.400407  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:38.424847  546345 cri.go:89] found id: ""
	I1202 22:29:38.424873  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.424881  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:38.424888  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:38.424946  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:38.452445  546345 cri.go:89] found id: ""
	I1202 22:29:38.452472  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.452481  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:38.452487  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:38.452544  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:38.480761  546345 cri.go:89] found id: ""
	I1202 22:29:38.480783  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.480804  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:38.480811  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:38.480870  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:38.505019  546345 cri.go:89] found id: ""
	I1202 22:29:38.505044  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.505052  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:38.505059  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:38.505116  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:38.528009  546345 cri.go:89] found id: ""
	I1202 22:29:38.528036  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.528045  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:38.528052  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:38.528109  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:38.595576  546345 cri.go:89] found id: ""
	I1202 22:29:38.595598  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.595606  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:38.595613  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:38.595671  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:38.638153  546345 cri.go:89] found id: ""
	I1202 22:29:38.638177  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.638186  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:38.638195  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:38.638206  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:38.653639  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:38.653696  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:38.715223  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:38.707314    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:38.708623    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:38.709486    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:38.710286    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:38.711017    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:38.715245  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:38.715258  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:38.739162  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:38.739196  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:38.766317  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:38.766345  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:41.321520  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:41.331550  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:41.331636  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:41.355934  546345 cri.go:89] found id: ""
	I1202 22:29:41.355959  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.355968  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:41.355975  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:41.356035  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:41.381232  546345 cri.go:89] found id: ""
	I1202 22:29:41.381254  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.381263  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:41.381269  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:41.381325  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:41.406147  546345 cri.go:89] found id: ""
	I1202 22:29:41.406171  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.406179  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:41.406186  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:41.406246  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:41.435516  546345 cri.go:89] found id: ""
	I1202 22:29:41.435542  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.435551  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:41.435559  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:41.435619  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:41.460909  546345 cri.go:89] found id: ""
	I1202 22:29:41.460932  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.460941  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:41.460948  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:41.461035  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:41.487520  546345 cri.go:89] found id: ""
	I1202 22:29:41.487553  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.487570  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:41.487577  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:41.487648  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:41.512354  546345 cri.go:89] found id: ""
	I1202 22:29:41.512425  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.512449  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:41.512469  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:41.512552  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:41.536885  546345 cri.go:89] found id: ""
	I1202 22:29:41.536908  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.536917  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:41.536927  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:41.536938  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:41.607465  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:41.607514  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:41.635996  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:41.636025  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:41.712077  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:41.704951    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:41.705647    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:41.707121    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:41.707514    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:41.708659    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:41.712100  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:41.712113  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:41.736613  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:41.736660  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:44.265095  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:44.276615  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:44.276703  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:44.303304  546345 cri.go:89] found id: ""
	I1202 22:29:44.303325  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.303334  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:44.303340  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:44.303403  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:44.333144  546345 cri.go:89] found id: ""
	I1202 22:29:44.333167  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.333176  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:44.333182  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:44.333258  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:44.359646  546345 cri.go:89] found id: ""
	I1202 22:29:44.359675  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.359684  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:44.359691  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:44.359751  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:44.384230  546345 cri.go:89] found id: ""
	I1202 22:29:44.384255  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.384264  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:44.384270  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:44.384342  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:44.409648  546345 cri.go:89] found id: ""
	I1202 22:29:44.409701  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.409711  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:44.409718  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:44.409776  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:44.434410  546345 cri.go:89] found id: ""
	I1202 22:29:44.434437  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.434446  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:44.434452  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:44.434512  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:44.458352  546345 cri.go:89] found id: ""
	I1202 22:29:44.458376  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.458385  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:44.458392  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:44.458465  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:44.486353  546345 cri.go:89] found id: ""
	I1202 22:29:44.486385  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.486396  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:44.486420  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:44.486436  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:44.510698  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:44.510737  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:44.552264  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:44.552293  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:44.660418  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:44.660451  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:44.676162  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:44.676230  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:44.741313  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:44.734563    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:44.735043    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:44.736515    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:44.736835    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:44.738249    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:47.241695  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:47.253909  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:47.253977  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:47.280127  546345 cri.go:89] found id: ""
	I1202 22:29:47.280151  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.280159  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:47.280166  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:47.280227  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:47.309688  546345 cri.go:89] found id: ""
	I1202 22:29:47.309711  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.309719  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:47.309726  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:47.309795  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:47.334233  546345 cri.go:89] found id: ""
	I1202 22:29:47.334259  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.334268  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:47.334275  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:47.334330  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:47.363203  546345 cri.go:89] found id: ""
	I1202 22:29:47.363228  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.363237  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:47.363245  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:47.363314  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:47.390073  546345 cri.go:89] found id: ""
	I1202 22:29:47.390096  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.390104  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:47.390111  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:47.390168  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:47.416413  546345 cri.go:89] found id: ""
	I1202 22:29:47.416435  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.416444  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:47.416451  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:47.416518  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:47.440718  546345 cri.go:89] found id: ""
	I1202 22:29:47.440743  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.440753  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:47.440759  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:47.440818  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:47.463875  546345 cri.go:89] found id: ""
	I1202 22:29:47.463901  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.463910  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:47.463920  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:47.463931  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:47.492814  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:47.492842  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:47.558225  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:47.558264  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:47.574145  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:47.574174  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:47.666298  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:47.658677    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:47.659357    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:47.660936    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:47.661477    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:47.663047    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:47.658677    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:47.659357    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:47.660936    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:47.661477    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:47.663047    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:47.666357  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:47.666385  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
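
The cycle above is minikube's control-plane probe: it first looks for a running kube-apiserver process, then asks the CRI for each expected control-plane container by name; every query returns an empty ID list, so the log-gathering pass runs instead. A minimal shell sketch of the same probe, assuming crictl is configured against the node's containerd socket (an illustration, not minikube's own code):

    # Sketch: query the CRI for each expected control-plane container,
    # mirroring the crictl calls in the log above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container found matching \"$name\"" \
                    || echo "$name: $ids"
    done

An empty result for every name, as seen here, means the control plane never came up at all rather than crashing partway through.
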
	I1202 22:29:50.191511  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:50.202178  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:50.202258  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:50.229173  546345 cri.go:89] found id: ""
	I1202 22:29:50.229213  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.229222  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:50.229228  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:50.229293  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:50.253932  546345 cri.go:89] found id: ""
	I1202 22:29:50.253962  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.253971  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:50.253977  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:50.254033  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:50.278257  546345 cri.go:89] found id: ""
	I1202 22:29:50.278280  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.278289  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:50.278296  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:50.278351  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:50.306884  546345 cri.go:89] found id: ""
	I1202 22:29:50.306907  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.306914  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:50.306921  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:50.306989  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:50.331454  546345 cri.go:89] found id: ""
	I1202 22:29:50.331528  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.331553  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:50.331566  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:50.331658  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:50.355157  546345 cri.go:89] found id: ""
	I1202 22:29:50.355230  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.355254  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:50.355268  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:50.355346  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:50.380390  546345 cri.go:89] found id: ""
	I1202 22:29:50.380415  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.380424  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:50.380430  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:50.380518  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:50.408708  546345 cri.go:89] found id: ""
	I1202 22:29:50.408733  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.408742  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:50.408751  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:50.408800  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:50.466607  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:50.466641  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:50.482087  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:50.482154  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:50.548310  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:50.537223    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:50.537900    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:50.541639    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:50.542300    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:50.543919    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:50.537223    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:50.537900    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:50.541639    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:50.542300    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:50.543919    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:50.548334  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:50.548347  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:50.581455  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:50.581492  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:51.497099  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:29:51.556470  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:29:51.556588  546345 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
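
The dashboard apply fails before any manifest reaches the cluster: kubectl's client-side validation tries to download the OpenAPI schema from the apiserver, and that fetch is what hits connection refused. The --validate=false hint in the stderr only skips the schema download; the apply itself would still need a live apiserver on :8443. A hedged reproduction using only paths and flags already shown in the log:

    # Sketch: skip client-side schema validation, per the stderr hint.
    # This removes the openapi fetch; the apply still fails for as long
    # as the apiserver is down.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      --validate=false -f /etc/kubernetes/addons/dashboard-ns.yaml
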
	I1202 22:29:53.133025  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:53.143115  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:53.143180  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:53.166147  546345 cri.go:89] found id: ""
	I1202 22:29:53.166169  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.166177  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:53.166183  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:53.166251  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:53.191215  546345 cri.go:89] found id: ""
	I1202 22:29:53.191238  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.191247  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:53.191253  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:53.191329  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:53.214527  546345 cri.go:89] found id: ""
	I1202 22:29:53.214593  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.214616  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:53.214631  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:53.214701  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:53.239062  546345 cri.go:89] found id: ""
	I1202 22:29:53.239089  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.239098  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:53.239105  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:53.239270  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:53.269346  546345 cri.go:89] found id: ""
	I1202 22:29:53.269416  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.269440  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:53.269462  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:53.269571  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:53.293728  546345 cri.go:89] found id: ""
	I1202 22:29:53.293802  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.293825  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:53.293845  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:53.293942  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:53.322079  546345 cri.go:89] found id: ""
	I1202 22:29:53.322106  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.322115  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:53.322121  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:53.322180  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:53.345988  546345 cri.go:89] found id: ""
	I1202 22:29:53.346055  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.346079  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:53.346103  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:53.346128  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:53.402872  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:53.402909  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:53.418121  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:53.418150  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:53.480652  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:53.472986    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:53.473648    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:53.475387    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:53.475778    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:53.477212    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:53.472986    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:53.473648    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:53.475387    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:53.475778    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:53.477212    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:53.480725  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:53.480756  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:53.505378  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:53.505414  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
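
The "container status" step is wrapped in a shell fallback: if crictl is not on the PATH, the command degrades to docker ps. Written with $() instead of backticks, the same one-liner reads:

    # Sketch: equivalent form of the fallback in the log line above.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
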
	I1202 22:29:56.037255  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:56.048340  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:56.048412  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:56.080851  546345 cri.go:89] found id: ""
	I1202 22:29:56.080878  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.080888  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:56.080894  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:56.080963  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:56.105446  546345 cri.go:89] found id: ""
	I1202 22:29:56.105472  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.105481  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:56.105488  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:56.105545  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:56.131318  546345 cri.go:89] found id: ""
	I1202 22:29:56.131344  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.131352  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:56.131358  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:56.131414  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:56.159096  546345 cri.go:89] found id: ""
	I1202 22:29:56.159118  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.159126  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:56.159132  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:56.159191  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:56.183173  546345 cri.go:89] found id: ""
	I1202 22:29:56.183199  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.183207  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:56.183214  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:56.183279  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:56.207984  546345 cri.go:89] found id: ""
	I1202 22:29:56.208017  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.208029  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:56.208035  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:56.208095  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:56.232594  546345 cri.go:89] found id: ""
	I1202 22:29:56.232617  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.232625  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:56.232632  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:56.232699  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:56.257221  546345 cri.go:89] found id: ""
	I1202 22:29:56.257247  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.257256  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:56.257265  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:56.257278  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:56.283035  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:56.283061  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:56.339962  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:56.339997  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:56.355699  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:56.355773  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:56.414625  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:56.408245    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:56.408723    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:56.409828    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:56.410193    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:56.411567    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:56.408245    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:56.408723    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:56.409828    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:56.410193    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:56.411567    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:56.414693  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:56.414738  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:56.879279  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:29:56.938440  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:29:56.938561  546345 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
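
The default-storageclass addon hits the same validation path. Rather than retrying applies, the apiserver can be probed directly; /healthz and /readyz are standard kube-apiserver endpoints, so a quick check from the node looks like this (illustration, assuming shell access to the node):

    # Sketch: probe the apiserver instead of retrying applies.
    # -k skips TLS verification; a healthy apiserver answers "ok".
    curl -ks https://localhost:8443/healthz \
      || echo "apiserver not answering on :8443"
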
	I1202 22:29:58.938802  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:58.951366  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:58.951487  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:58.978893  546345 cri.go:89] found id: ""
	I1202 22:29:58.978916  546345 logs.go:282] 0 containers: []
	W1202 22:29:58.978924  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:58.978931  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:58.978990  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:59.005270  546345 cri.go:89] found id: ""
	I1202 22:29:59.005299  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.005309  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:59.005316  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:59.005396  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:59.029424  546345 cri.go:89] found id: ""
	I1202 22:29:59.029453  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.029461  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:59.029468  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:59.029525  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:59.053363  546345 cri.go:89] found id: ""
	I1202 22:29:59.053398  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.053407  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:59.053414  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:59.053481  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:59.078974  546345 cri.go:89] found id: ""
	I1202 22:29:59.079051  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.079073  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:59.079088  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:59.079162  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:59.103336  546345 cri.go:89] found id: ""
	I1202 22:29:59.103358  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.103366  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:59.103383  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:59.103441  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:59.127855  546345 cri.go:89] found id: ""
	I1202 22:29:59.127929  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.127952  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:59.127972  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:59.128077  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:59.151167  546345 cri.go:89] found id: ""
	I1202 22:29:59.151196  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.151204  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:59.151213  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:59.151224  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:59.208516  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:59.208559  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:59.224755  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:59.224780  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:59.286748  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:59.279244    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.279739    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.281332    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.281754    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.283394    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:59.279244    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.279739    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.281332    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.281754    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.283394    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:59.286772  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:59.286787  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:59.311855  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:59.311889  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:01.391459  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:30:01.475431  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:30:01.475652  546345 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 22:30:01.478828  546345 out.go:179] * Enabled addons: 
	I1202 22:30:01.482057  546345 addons.go:530] duration metric: took 1m44.065625472s for enable addons: enabled=[]
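
With every apply refused, the addon phase closes out after 1m44s with an empty addon list, and the run drops back into the same probe/gather loop on a roughly three-second cadence. A bounded version of that wait, reusing the pgrep pattern from the log (illustration only):

    # Sketch: bounded wait for a kube-apiserver process, ~3s cadence,
    # matching the pgrep pattern used throughout this log.
    for i in $(seq 1 40); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null \
        && { echo "kube-apiserver is up"; break; }
      sleep 3
    done
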
	I1202 22:30:01.843006  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:01.854584  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:01.854684  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:01.885465  546345 cri.go:89] found id: ""
	I1202 22:30:01.885501  546345 logs.go:282] 0 containers: []
	W1202 22:30:01.885510  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:01.885517  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:01.885587  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:01.917316  546345 cri.go:89] found id: ""
	I1202 22:30:01.917348  546345 logs.go:282] 0 containers: []
	W1202 22:30:01.917359  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:01.917366  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:01.917463  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:01.943052  546345 cri.go:89] found id: ""
	I1202 22:30:01.943078  546345 logs.go:282] 0 containers: []
	W1202 22:30:01.943086  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:01.943093  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:01.943153  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:01.969294  546345 cri.go:89] found id: ""
	I1202 22:30:01.969321  546345 logs.go:282] 0 containers: []
	W1202 22:30:01.969330  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:01.969339  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:01.969402  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:01.996336  546345 cri.go:89] found id: ""
	I1202 22:30:01.996405  546345 logs.go:282] 0 containers: []
	W1202 22:30:01.996428  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:01.996449  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:01.996537  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:02.025075  546345 cri.go:89] found id: ""
	I1202 22:30:02.025158  546345 logs.go:282] 0 containers: []
	W1202 22:30:02.025183  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:02.025203  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:02.025300  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:02.078384  546345 cri.go:89] found id: ""
	I1202 22:30:02.078450  546345 logs.go:282] 0 containers: []
	W1202 22:30:02.078474  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:02.078493  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:02.078585  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:02.124922  546345 cri.go:89] found id: ""
	I1202 22:30:02.125001  546345 logs.go:282] 0 containers: []
	W1202 22:30:02.125021  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:02.125031  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:02.125044  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:02.197595  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:02.188806    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.189743    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.191423    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.192018    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.193637    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:02.188806    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.189743    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.191423    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.192018    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.193637    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:02.197618  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:02.197634  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:02.223170  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:02.223203  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:02.255281  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:02.255348  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:02.310654  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:02.310690  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:04.828623  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:04.839157  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:04.839282  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:04.863863  546345 cri.go:89] found id: ""
	I1202 22:30:04.863887  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.863896  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:04.863903  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:04.863996  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:04.890006  546345 cri.go:89] found id: ""
	I1202 22:30:04.890031  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.890040  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:04.890047  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:04.890146  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:04.915998  546345 cri.go:89] found id: ""
	I1202 22:30:04.916021  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.916035  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:04.916042  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:04.916100  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:04.940395  546345 cri.go:89] found id: ""
	I1202 22:30:04.940420  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.940429  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:04.940435  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:04.940495  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:04.964621  546345 cri.go:89] found id: ""
	I1202 22:30:04.964650  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.964660  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:04.964667  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:04.964737  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:04.989632  546345 cri.go:89] found id: ""
	I1202 22:30:04.989685  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.989694  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:04.989702  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:04.989760  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:05.019501  546345 cri.go:89] found id: ""
	I1202 22:30:05.019528  546345 logs.go:282] 0 containers: []
	W1202 22:30:05.019537  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:05.019545  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:05.019610  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:05.049637  546345 cri.go:89] found id: ""
	I1202 22:30:05.049682  546345 logs.go:282] 0 containers: []
	W1202 22:30:05.049690  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:05.049700  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:05.049711  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:05.088244  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:05.088281  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:05.133381  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:05.133409  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:05.194841  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:05.194874  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:05.210533  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:05.210560  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:05.273348  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:05.265533    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.265959    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.267751    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.268062    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.270006    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:05.265533    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.265959    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.267751    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.268062    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.270006    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
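	[Editor's note: the lines above are one iteration of minikube's apiserver wait loop; the same cycle repeats below roughly every three seconds (22:30:04, :07, :10, ...) until the start timeout. A minimal shell sketch of the cycle, reconstructed only from the commands visible in this log — the loop structure and the 3-second sleep are inferences from the timestamps, not minikube source:
	
	    #!/bin/bash
	    # Probe for a running apiserver process; while absent, re-check each
	    # expected control-plane container and re-gather node logs.
	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                  kube-controller-manager kindnet kubernetes-dashboard; do
	        sudo crictl ps -a --quiet --name="$name"   # empty output => "found id: \"\"" above
	      done
	      sudo journalctl -u kubelet -n 400            # "Gathering logs for kubelet"
	      sudo journalctl -u containerd -n 400         # "Gathering logs for containerd"
	      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	      sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig   # fails: nothing listens on :8443
	      sleep 3                                       # interval inferred from log timestamps
	    done
	
	Every crictl query returns an empty ID list and kubectl cannot reach localhost:8443, so each cycle ends with the same "connection refused" describe-nodes failure.]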
	I1202 22:30:07.774501  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:07.784828  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:07.784927  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:07.814568  546345 cri.go:89] found id: ""
	I1202 22:30:07.814610  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.814619  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:07.814627  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:07.814711  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:07.839281  546345 cri.go:89] found id: ""
	I1202 22:30:07.839306  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.839325  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:07.839333  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:07.839410  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:07.863734  546345 cri.go:89] found id: ""
	I1202 22:30:07.863756  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.863764  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:07.863771  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:07.863830  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:07.887517  546345 cri.go:89] found id: ""
	I1202 22:30:07.887541  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.887549  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:07.887556  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:07.887615  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:07.912577  546345 cri.go:89] found id: ""
	I1202 22:30:07.912599  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.912608  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:07.912614  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:07.912684  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:07.937037  546345 cri.go:89] found id: ""
	I1202 22:30:07.937062  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.937071  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:07.937088  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:07.937153  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:07.961873  546345 cri.go:89] found id: ""
	I1202 22:30:07.961901  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.961910  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:07.961916  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:07.961974  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:07.985864  546345 cri.go:89] found id: ""
	I1202 22:30:07.985890  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.985906  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:07.985917  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:07.985928  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:08.011244  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:08.011284  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:08.055290  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:08.055321  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:08.134015  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:08.134069  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:08.154013  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:08.154041  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:08.223778  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:08.216502    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.217150    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.218711    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.219222    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.220667    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:08.216502    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.217150    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.218711    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.219222    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.220667    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:10.723964  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:10.736098  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:10.736214  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:10.761205  546345 cri.go:89] found id: ""
	I1202 22:30:10.761227  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.761236  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:10.761243  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:10.761303  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:10.785829  546345 cri.go:89] found id: ""
	I1202 22:30:10.785856  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.785865  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:10.785872  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:10.785931  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:10.815724  546345 cri.go:89] found id: ""
	I1202 22:30:10.815748  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.815757  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:10.815767  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:10.815844  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:10.840563  546345 cri.go:89] found id: ""
	I1202 22:30:10.840586  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.840594  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:10.840601  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:10.840667  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:10.869275  546345 cri.go:89] found id: ""
	I1202 22:30:10.869349  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.869372  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:10.869391  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:10.869478  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:10.894450  546345 cri.go:89] found id: ""
	I1202 22:30:10.894477  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.894486  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:10.894493  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:10.894572  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:10.919134  546345 cri.go:89] found id: ""
	I1202 22:30:10.919161  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.919170  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:10.919177  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:10.919238  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:10.944009  546345 cri.go:89] found id: ""
	I1202 22:30:10.944035  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.944044  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:10.944053  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:10.944066  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:11.000144  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:11.000183  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:11.018501  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:11.018532  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:11.149770  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:11.141251    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.142054    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.143941    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.144500    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.146190    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:11.141251    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.142054    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.143941    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.144500    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.146190    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:11.149837  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:11.149860  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:11.175018  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:11.175055  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:13.702967  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:13.713482  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:13.713560  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:13.739844  546345 cri.go:89] found id: ""
	I1202 22:30:13.739867  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.739876  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:13.739886  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:13.739943  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:13.765162  546345 cri.go:89] found id: ""
	I1202 22:30:13.765184  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.765192  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:13.765199  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:13.765256  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:13.790968  546345 cri.go:89] found id: ""
	I1202 22:30:13.790991  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.790999  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:13.791005  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:13.791069  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:13.816755  546345 cri.go:89] found id: ""
	I1202 22:30:13.816791  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.816799  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:13.816806  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:13.816869  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:13.843444  546345 cri.go:89] found id: ""
	I1202 22:30:13.843469  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.843477  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:13.843484  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:13.843551  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:13.868489  546345 cri.go:89] found id: ""
	I1202 22:30:13.868514  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.868523  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:13.868530  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:13.868608  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:13.893527  546345 cri.go:89] found id: ""
	I1202 22:30:13.893552  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.893560  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:13.893567  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:13.893624  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:13.919358  546345 cri.go:89] found id: ""
	I1202 22:30:13.919382  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.919390  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:13.919400  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:13.919411  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:13.946818  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:13.946846  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:14.004198  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:14.004294  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:14.021120  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:14.021157  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:14.145347  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:14.136103    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.138065    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.138857    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.140566    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.141159    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:14.136103    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.138065    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.138857    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.140566    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.141159    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:14.145369  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:14.145382  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:16.669687  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:16.680323  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:16.680426  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:16.705295  546345 cri.go:89] found id: ""
	I1202 22:30:16.705320  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.705329  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:16.705335  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:16.705394  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:16.729538  546345 cri.go:89] found id: ""
	I1202 22:30:16.729633  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.729648  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:16.729682  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:16.729766  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:16.754022  546345 cri.go:89] found id: ""
	I1202 22:30:16.754045  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.754053  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:16.754059  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:16.754119  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:16.780138  546345 cri.go:89] found id: ""
	I1202 22:30:16.780163  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.780171  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:16.780178  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:16.780237  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:16.805096  546345 cri.go:89] found id: ""
	I1202 22:30:16.805123  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.805134  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:16.805141  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:16.805201  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:16.830436  546345 cri.go:89] found id: ""
	I1202 22:30:16.830461  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.830470  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:16.830477  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:16.830537  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:16.859101  546345 cri.go:89] found id: ""
	I1202 22:30:16.859126  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.859135  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:16.859142  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:16.859201  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:16.884001  546345 cri.go:89] found id: ""
	I1202 22:30:16.884025  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.884033  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:16.884043  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:16.884054  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:16.919216  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:16.919242  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:16.974540  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:16.974574  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:16.990333  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:16.990361  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:17.096330  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:17.076545    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.087821    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.088549    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.090292    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.090828    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:17.076545    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.087821    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.088549    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.090292    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.090828    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:17.096351  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:17.096363  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:19.641119  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:19.651302  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:19.651372  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:19.675892  546345 cri.go:89] found id: ""
	I1202 22:30:19.675920  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.675929  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:19.675935  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:19.675993  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:19.700442  546345 cri.go:89] found id: ""
	I1202 22:30:19.700472  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.700480  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:19.700487  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:19.700545  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:19.724905  546345 cri.go:89] found id: ""
	I1202 22:30:19.724933  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.724941  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:19.724948  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:19.725008  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:19.749042  546345 cri.go:89] found id: ""
	I1202 22:30:19.749064  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.749072  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:19.749079  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:19.749142  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:19.772319  546345 cri.go:89] found id: ""
	I1202 22:30:19.772346  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.772354  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:19.772361  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:19.772423  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:19.796590  546345 cri.go:89] found id: ""
	I1202 22:30:19.796661  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.796685  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:19.796706  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:19.796791  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:19.820897  546345 cri.go:89] found id: ""
	I1202 22:30:19.820971  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.820994  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:19.821013  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:19.821097  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:19.845057  546345 cri.go:89] found id: ""
	I1202 22:30:19.845127  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.845151  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:19.845173  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:19.845210  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:19.901157  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:19.901190  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:19.916681  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:19.916709  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:19.978835  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:19.970731    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.971143    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.973694    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.974147    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.975596    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:19.970731    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.971143    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.973694    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.974147    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.975596    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:19.978855  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:19.978868  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:20.003532  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:20.003576  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:22.540194  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:22.550669  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:22.550752  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:22.575137  546345 cri.go:89] found id: ""
	I1202 22:30:22.575162  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.575179  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:22.575186  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:22.575246  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:22.600172  546345 cri.go:89] found id: ""
	I1202 22:30:22.600199  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.600208  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:22.600214  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:22.600280  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:22.627626  546345 cri.go:89] found id: ""
	I1202 22:30:22.627652  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.627661  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:22.627667  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:22.627727  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:22.652380  546345 cri.go:89] found id: ""
	I1202 22:30:22.652407  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.652416  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:22.652422  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:22.652483  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:22.679899  546345 cri.go:89] found id: ""
	I1202 22:30:22.679924  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.679933  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:22.679939  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:22.679999  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:22.704508  546345 cri.go:89] found id: ""
	I1202 22:30:22.704533  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.704542  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:22.704548  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:22.704623  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:22.729344  546345 cri.go:89] found id: ""
	I1202 22:30:22.729372  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.729380  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:22.729387  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:22.729451  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:22.753872  546345 cri.go:89] found id: ""
	I1202 22:30:22.753899  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.753908  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:22.753918  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:22.753929  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:22.810619  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:22.810654  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:22.826861  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:22.826887  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:22.891768  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:22.882105    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.883827    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.884903    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.886521    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.887080    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:22.882105    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.883827    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.884903    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.886521    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.887080    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:22.891788  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:22.891801  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:22.915527  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:22.915563  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:25.443424  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:25.454070  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:25.454140  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:25.477867  546345 cri.go:89] found id: ""
	I1202 22:30:25.477888  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.477896  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:25.477902  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:25.477961  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:25.503405  546345 cri.go:89] found id: ""
	I1202 22:30:25.503440  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.503449  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:25.503456  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:25.503548  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:25.528678  546345 cri.go:89] found id: ""
	I1202 22:30:25.528703  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.528711  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:25.528718  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:25.528784  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:25.555479  546345 cri.go:89] found id: ""
	I1202 22:30:25.555505  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.555513  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:25.555520  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:25.555587  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:25.588375  546345 cri.go:89] found id: ""
	I1202 22:30:25.588398  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.588408  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:25.588415  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:25.588475  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:25.613403  546345 cri.go:89] found id: ""
	I1202 22:30:25.613488  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.613511  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:25.613532  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:25.613627  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:25.644249  546345 cri.go:89] found id: ""
	I1202 22:30:25.644273  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.644282  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:25.644289  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:25.644348  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:25.669360  546345 cri.go:89] found id: ""
	I1202 22:30:25.669385  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.669394  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:25.669432  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:25.669448  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:25.701067  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:25.701095  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:25.755359  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:25.755393  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:25.771118  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:25.771147  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:25.830809  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:25.823565    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.824061    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.825693    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.826148    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.827594    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:25.823565    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.824061    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.825693    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.826148    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.827594    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:25.830832  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:25.830845  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:28.355998  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:28.366515  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:28.366588  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:28.391593  546345 cri.go:89] found id: ""
	I1202 22:30:28.391618  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.391627  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:28.391634  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:28.391694  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:28.420025  546345 cri.go:89] found id: ""
	I1202 22:30:28.420051  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.420060  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:28.420073  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:28.420137  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:28.444623  546345 cri.go:89] found id: ""
	I1202 22:30:28.444647  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.444655  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:28.444662  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:28.444726  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:28.469992  546345 cri.go:89] found id: ""
	I1202 22:30:28.470015  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.470024  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:28.470030  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:28.470089  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:28.495503  546345 cri.go:89] found id: ""
	I1202 22:30:28.495580  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.495602  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:28.495616  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:28.495687  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:28.520105  546345 cri.go:89] found id: ""
	I1202 22:30:28.520130  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.520139  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:28.520145  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:28.520207  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:28.547412  546345 cri.go:89] found id: ""
	I1202 22:30:28.547444  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.547454  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:28.547460  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:28.547522  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:28.572324  546345 cri.go:89] found id: ""
	I1202 22:30:28.572349  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.572358  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:28.572367  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:28.572379  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:28.587929  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:28.587952  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:28.651756  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:28.642887    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.643983    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.645771    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.646308    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.648089    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:28.642887    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.643983    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.645771    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.646308    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.648089    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:28.651790  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:28.651803  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:28.676386  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:28.676421  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:28.708051  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:28.708079  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
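
Each polling cycle above sweeps the expected control-plane components by name with `crictl ps -a --quiet --name=<component>`; an empty result (`found id: ""`) means no container for that component exists in any state. A minimal sketch of the same sweep, assuming `crictl` is installed and `sudo` is non-interactive (illustrative only, not the minikube implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same component list the log iterates over.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		// Empty stdout corresponds to `found id: ""` in the log.
		if id := strings.TrimSpace(string(out)); id == "" {
			fmt.Printf("%s: no container found\n", name)
		} else {
			fmt.Printf("%s: %s\n", name, id)
		}
	}
}
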
	I1202 22:30:31.265370  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:31.275659  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:31.275728  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:31.335888  546345 cri.go:89] found id: ""
	I1202 22:30:31.335928  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.335956  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:31.335970  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:31.336049  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:31.386854  546345 cri.go:89] found id: ""
	I1202 22:30:31.386880  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.386888  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:31.386895  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:31.386979  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:31.410707  546345 cri.go:89] found id: ""
	I1202 22:30:31.410731  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.410739  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:31.410746  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:31.410804  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:31.439172  546345 cri.go:89] found id: ""
	I1202 22:30:31.439239  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.439263  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:31.439276  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:31.439355  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:31.467199  546345 cri.go:89] found id: ""
	I1202 22:30:31.467277  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.467293  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:31.467301  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:31.467390  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:31.495081  546345 cri.go:89] found id: ""
	I1202 22:30:31.495155  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.495178  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:31.495193  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:31.495270  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:31.518280  546345 cri.go:89] found id: ""
	I1202 22:30:31.518306  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.518315  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:31.518323  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:31.518400  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:31.543715  546345 cri.go:89] found id: ""
	I1202 22:30:31.543757  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.543793  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:31.543809  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:31.543821  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:31.601359  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:31.601392  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:31.617291  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:31.617323  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:31.682689  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:31.674005    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.674679    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.676468    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.677142    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.678841    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:31.674005    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.674679    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.676468    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.677142    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.678841    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:31.682713  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:31.682727  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:31.706626  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:31.706661  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:34.235905  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:34.246438  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:34.246560  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:34.271279  546345 cri.go:89] found id: ""
	I1202 22:30:34.271350  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.271365  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:34.271374  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:34.271434  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:34.303460  546345 cri.go:89] found id: ""
	I1202 22:30:34.303498  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.303507  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:34.303513  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:34.303635  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:34.355759  546345 cri.go:89] found id: ""
	I1202 22:30:34.355786  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.355795  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:34.355801  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:34.355908  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:34.402466  546345 cri.go:89] found id: ""
	I1202 22:30:34.402553  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.402572  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:34.402580  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:34.402654  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:34.431909  546345 cri.go:89] found id: ""
	I1202 22:30:34.431932  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.431941  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:34.431947  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:34.432004  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:34.455451  546345 cri.go:89] found id: ""
	I1202 22:30:34.455476  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.455484  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:34.455491  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:34.455632  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:34.478771  546345 cri.go:89] found id: ""
	I1202 22:30:34.478797  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.478805  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:34.478812  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:34.478904  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:34.502377  546345 cri.go:89] found id: ""
	I1202 22:30:34.502452  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.502468  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:34.502479  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:34.502490  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:34.559881  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:34.559925  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:34.576755  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:34.576785  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:34.640203  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:34.633348    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.633906    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.635346    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.635740    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.637154    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:34.633348    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.633906    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.635346    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.635740    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.637154    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:34.640223  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:34.640236  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:34.664331  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:34.664368  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
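
Alongside the container sweep, each cycle gathers unit logs with `journalctl -u <unit> -n 400` and kernel warnings via `dmesg`. A small sketch of the journal gather under the same assumptions (systemd host, non-interactive `sudo`); the 400-entry cap mirrors the commands in the log, and the sketch is illustrative, not minikube source:

package main

import (
	"fmt"
	"os/exec"
)

// unitLogs captures the last 400 journal entries for a systemd unit,
// matching the journalctl invocations in the log above.
func unitLogs(unit string) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
	return string(out), err
}

func main() {
	for _, u := range []string{"kubelet", "containerd"} {
		logs, err := unitLogs(u)
		if err != nil {
			fmt.Printf("%s: %v\n", u, err)
			continue
		}
		fmt.Printf("== %s: captured %d bytes ==\n", u, len(logs))
	}
}
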
	I1202 22:30:37.198596  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:37.208910  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:37.208981  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:37.233321  546345 cri.go:89] found id: ""
	I1202 22:30:37.233346  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.233354  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:37.233361  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:37.233419  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:37.259307  546345 cri.go:89] found id: ""
	I1202 22:30:37.259331  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.259340  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:37.259346  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:37.259404  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:37.282333  546345 cri.go:89] found id: ""
	I1202 22:30:37.282358  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.282367  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:37.282373  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:37.282430  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:37.351993  546345 cri.go:89] found id: ""
	I1202 22:30:37.352018  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.352027  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:37.352034  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:37.352124  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:37.398805  546345 cri.go:89] found id: ""
	I1202 22:30:37.398829  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.398840  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:37.398847  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:37.398912  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:37.422987  546345 cri.go:89] found id: ""
	I1202 22:30:37.423010  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.423019  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:37.423026  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:37.423100  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:37.447502  546345 cri.go:89] found id: ""
	I1202 22:30:37.447528  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.447537  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:37.447544  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:37.447630  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:37.471899  546345 cri.go:89] found id: ""
	I1202 22:30:37.471934  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.471943  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:37.471952  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:37.471963  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:37.528313  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:37.528350  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:37.544433  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:37.544464  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:37.611970  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:37.603634    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.604306    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.606167    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.606744    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.608686    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:37.603634    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.604306    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.606167    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.606744    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.608686    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:37.611994  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:37.612007  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:37.636937  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:37.636971  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:40.165587  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:40.177235  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:40.177323  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:40.205543  546345 cri.go:89] found id: ""
	I1202 22:30:40.205568  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.205576  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:40.205583  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:40.205644  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:40.232642  546345 cri.go:89] found id: ""
	I1202 22:30:40.232668  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.232677  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:40.232684  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:40.232746  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:40.259447  546345 cri.go:89] found id: ""
	I1202 22:30:40.259482  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.259496  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:40.259503  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:40.259591  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:40.297166  546345 cri.go:89] found id: ""
	I1202 22:30:40.297190  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.297198  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:40.297205  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:40.297268  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:40.337983  546345 cri.go:89] found id: ""
	I1202 22:30:40.338005  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.338014  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:40.338020  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:40.338079  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:40.380237  546345 cri.go:89] found id: ""
	I1202 22:30:40.380266  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.380274  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:40.380282  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:40.380343  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:40.412498  546345 cri.go:89] found id: ""
	I1202 22:30:40.412563  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.412572  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:40.412579  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:40.412637  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:40.441910  546345 cri.go:89] found id: ""
	I1202 22:30:40.441934  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.441943  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:40.441952  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:40.441969  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:40.496209  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:40.496245  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:40.512922  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:40.512953  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:40.580850  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:40.572953    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.573782    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.575424    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.575716    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.577152    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:40.572953    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.573782    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.575424    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.575716    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.577152    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:40.580875  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:40.580887  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:40.605967  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:40.606001  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
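
The timestamps (22:30:25, :28, :31, :34, ...) show a roughly 3-second retry cadence: each cycle re-checks for a kube-apiserver process with `pgrep` and, when that fails, falls back to gathering diagnostics. A sketch of that poll-until-deadline shape, with the interval inferred from the log, the pgrep pattern copied from it, and the 4-minute deadline an assumption made for illustration (not the minikube code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Assumed deadline; the real timeout is not visible in this excerpt.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// Same liveness check the log repeats each cycle.
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // retry interval inferred from timestamps
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
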
	I1202 22:30:43.139166  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:43.149443  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:43.149516  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:43.177064  546345 cri.go:89] found id: ""
	I1202 22:30:43.177091  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.177099  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:43.177106  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:43.177164  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:43.201811  546345 cri.go:89] found id: ""
	I1202 22:30:43.201837  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.201845  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:43.201852  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:43.201912  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:43.225492  546345 cri.go:89] found id: ""
	I1202 22:30:43.225520  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.225529  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:43.225536  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:43.225594  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:43.249036  546345 cri.go:89] found id: ""
	I1202 22:30:43.249064  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.249072  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:43.249079  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:43.249139  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:43.277251  546345 cri.go:89] found id: ""
	I1202 22:30:43.277276  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.277285  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:43.277297  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:43.277354  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:43.314360  546345 cri.go:89] found id: ""
	I1202 22:30:43.314396  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.314406  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:43.314413  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:43.314488  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:43.374629  546345 cri.go:89] found id: ""
	I1202 22:30:43.374657  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.374666  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:43.374672  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:43.374730  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:43.416766  546345 cri.go:89] found id: ""
	I1202 22:30:43.416794  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.416803  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:43.416812  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:43.416823  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:43.471606  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:43.471644  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:43.487334  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:43.487362  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:43.553915  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:43.545764    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:43.546550    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:43.548203    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:43.548575    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:43.550132    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:43.545764    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:43.546550    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:43.548203    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:43.548575    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:43.550132    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:43.553939  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:43.553952  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:43.579222  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:43.579258  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:46.107248  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:46.118081  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:46.118150  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:46.142754  546345 cri.go:89] found id: ""
	I1202 22:30:46.142781  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.142789  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:46.142796  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:46.142861  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:46.169825  546345 cri.go:89] found id: ""
	I1202 22:30:46.169849  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.169858  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:46.169864  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:46.169929  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:46.196691  546345 cri.go:89] found id: ""
	I1202 22:30:46.196719  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.196728  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:46.196734  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:46.196796  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:46.221449  546345 cri.go:89] found id: ""
	I1202 22:30:46.221476  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.221485  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:46.221492  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:46.221552  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:46.246043  546345 cri.go:89] found id: ""
	I1202 22:30:46.246108  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.246131  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:46.246145  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:46.246227  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:46.271663  546345 cri.go:89] found id: ""
	I1202 22:30:46.271687  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.271695  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:46.271702  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:46.271760  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:46.315379  546345 cri.go:89] found id: ""
	I1202 22:30:46.315404  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.315413  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:46.315420  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:46.315477  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:46.359855  546345 cri.go:89] found id: ""
	I1202 22:30:46.359883  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.359893  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:46.359903  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:46.359915  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:46.377127  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:46.377158  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:46.445559  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:46.437310    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.438174    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.439869    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.440445    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.442197    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:46.437310    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.438174    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.439869    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.440445    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.442197    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:46.445583  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:46.445605  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:46.473713  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:46.473754  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:46.501189  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:46.501221  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
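
The "container status" gather uses a shell fallback chain: resolve `crictl` via `which` (falling back to the bare name so the failure message stays readable), and if `crictl` fails outright, try `docker ps -a`. A sketch of the same fallback expressed in Go, assuming non-interactive `sudo` (illustrative only, not the minikube code):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl and falls back to docker, mirroring
// the `sudo crictl ps -a || sudo docker ps -a` chain in the log.
func containerStatus() (string, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		return string(out), nil
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("both crictl and docker ps failed: %w", err)
	}
	return string(out), nil
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(out)
}
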
	I1202 22:30:49.058128  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:49.068126  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:49.068198  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:49.092263  546345 cri.go:89] found id: ""
	I1202 22:30:49.092288  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.092297  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:49.092303  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:49.092360  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:49.115983  546345 cri.go:89] found id: ""
	I1202 22:30:49.116008  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.116017  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:49.116024  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:49.116081  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:49.139874  546345 cri.go:89] found id: ""
	I1202 22:30:49.139899  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.139908  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:49.139915  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:49.139971  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:49.164359  546345 cri.go:89] found id: ""
	I1202 22:30:49.164388  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.164397  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:49.164404  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:49.164485  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:49.189339  546345 cri.go:89] found id: ""
	I1202 22:30:49.189365  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.189374  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:49.189383  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:49.189440  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:49.213800  546345 cri.go:89] found id: ""
	I1202 22:30:49.213826  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.213835  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:49.213842  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:49.213899  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:49.238436  546345 cri.go:89] found id: ""
	I1202 22:30:49.238463  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.238473  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:49.238480  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:49.238540  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:49.267385  546345 cri.go:89] found id: ""
	I1202 22:30:49.267459  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.267483  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:49.267500  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:49.267523  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:49.332624  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:49.332664  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:49.365875  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:49.365902  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:49.443796  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:49.436340    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.436862    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.438439    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.438890    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.440534    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:49.436340    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.436862    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.438439    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.438890    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.440534    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:49.443869  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:49.443888  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:49.467900  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:49.467933  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:51.996457  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:52.009596  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:52.009694  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:52.037146  546345 cri.go:89] found id: ""
	I1202 22:30:52.037172  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.037190  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:52.037197  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:52.037257  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:52.063683  546345 cri.go:89] found id: ""
	I1202 22:30:52.063708  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.063717  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:52.063724  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:52.063786  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:52.089573  546345 cri.go:89] found id: ""
	I1202 22:30:52.089598  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.089606  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:52.089613  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:52.089704  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:52.114785  546345 cri.go:89] found id: ""
	I1202 22:30:52.114810  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.114819  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:52.114826  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:52.114884  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:52.137456  546345 cri.go:89] found id: ""
	I1202 22:30:52.137479  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.137489  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:52.137495  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:52.137552  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:52.161392  546345 cri.go:89] found id: ""
	I1202 22:30:52.161418  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.161426  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:52.161433  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:52.161544  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:52.186619  546345 cri.go:89] found id: ""
	I1202 22:30:52.186640  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.186648  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:52.186658  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:52.186717  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:52.211047  546345 cri.go:89] found id: ""
	I1202 22:30:52.211069  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.211077  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:52.211086  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:52.211097  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:52.240049  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:52.240079  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:52.297727  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:52.297804  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:52.326988  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:52.327061  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:52.421545  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:52.413695    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.414266    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.415896    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.416344    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.418034    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:52.413695    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.414266    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.415896    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.416344    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.418034    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:52.421566  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:52.421578  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:54.945402  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:54.955618  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:54.955688  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:54.981106  546345 cri.go:89] found id: ""
	I1202 22:30:54.981132  546345 logs.go:282] 0 containers: []
	W1202 22:30:54.981140  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:54.981147  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:54.981210  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:55.017766  546345 cri.go:89] found id: ""
	I1202 22:30:55.017789  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.017798  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:55.017805  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:55.017886  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:55.051218  546345 cri.go:89] found id: ""
	I1202 22:30:55.051293  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.051320  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:55.051342  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:55.051449  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:55.092842  546345 cri.go:89] found id: ""
	I1202 22:30:55.092869  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.092879  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:55.092886  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:55.092955  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:55.131432  546345 cri.go:89] found id: ""
	I1202 22:30:55.131517  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.131546  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:55.131570  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:55.131702  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:55.166611  546345 cri.go:89] found id: ""
	I1202 22:30:55.166639  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.166653  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:55.166661  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:55.166737  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:55.197157  546345 cri.go:89] found id: ""
	I1202 22:30:55.197183  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.197199  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:55.197206  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:55.197277  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:55.229014  546345 cri.go:89] found id: ""
	I1202 22:30:55.229045  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.229053  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:55.229062  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:55.229074  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:55.284839  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:55.284877  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:55.312855  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:55.312884  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:55.414558  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:55.406788    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.407346    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.408947    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.409345    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.410922    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:55.406788    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.407346    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.408947    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.409345    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.410922    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:55.414580  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:55.414595  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:55.439435  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:55.439472  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:57.966587  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:57.977332  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:57.977425  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:58.009109  546345 cri.go:89] found id: ""
	I1202 22:30:58.009146  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.009155  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:58.009162  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:58.009277  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:58.034957  546345 cri.go:89] found id: ""
	I1202 22:30:58.034980  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.034989  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:58.034996  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:58.035075  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:58.059651  546345 cri.go:89] found id: ""
	I1202 22:30:58.059677  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.059687  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:58.059694  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:58.059754  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:58.092476  546345 cri.go:89] found id: ""
	I1202 22:30:58.092510  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.092520  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:58.092527  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:58.092601  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:58.116505  546345 cri.go:89] found id: ""
	I1202 22:30:58.116531  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.116539  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:58.116545  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:58.116617  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:58.141152  546345 cri.go:89] found id: ""
	I1202 22:30:58.141180  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.141189  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:58.141196  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:58.141252  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:58.167272  546345 cri.go:89] found id: ""
	I1202 22:30:58.167294  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.167302  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:58.167308  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:58.167365  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:58.193236  546345 cri.go:89] found id: ""
	I1202 22:30:58.193311  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.193334  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:58.193351  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:58.193374  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:58.248292  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:58.248365  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:58.263580  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:58.263610  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:58.374750  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:58.366839    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.367545    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.369133    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.369607    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.371475    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:30:58.366839    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.367545    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.369133    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.369607    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.371475    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:30:58.374772  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:58.374784  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:58.401522  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:58.401558  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:00.931781  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:00.941965  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:00.942042  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:00.966926  546345 cri.go:89] found id: ""
	I1202 22:31:00.966950  546345 logs.go:282] 0 containers: []
	W1202 22:31:00.966958  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:00.966965  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:00.967026  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:00.991438  546345 cri.go:89] found id: ""
	I1202 22:31:00.991463  546345 logs.go:282] 0 containers: []
	W1202 22:31:00.991472  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:00.991479  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:00.991538  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:01.019713  546345 cri.go:89] found id: ""
	I1202 22:31:01.019737  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.019745  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:01.019752  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:01.019809  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:01.044143  546345 cri.go:89] found id: ""
	I1202 22:31:01.044166  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.044174  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:01.044181  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:01.044240  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:01.069071  546345 cri.go:89] found id: ""
	I1202 22:31:01.069094  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.069102  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:01.069109  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:01.069170  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:01.097613  546345 cri.go:89] found id: ""
	I1202 22:31:01.097639  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.097648  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:01.097688  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:01.097754  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:01.124227  546345 cri.go:89] found id: ""
	I1202 22:31:01.124251  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.124260  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:01.124267  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:01.124329  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:01.150457  546345 cri.go:89] found id: ""
	I1202 22:31:01.150483  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.150491  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:01.150501  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:01.150512  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:01.175721  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:01.175753  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:01.204876  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:01.204907  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:01.261532  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:01.261567  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:01.277504  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:01.277531  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:01.369721  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:01.355410    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.360001    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.360744    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.364140    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.364693    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:01.355410    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.360001    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.360744    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.364140    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.364693    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:03.870061  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:03.880451  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:03.880522  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:03.903663  546345 cri.go:89] found id: ""
	I1202 22:31:03.903688  546345 logs.go:282] 0 containers: []
	W1202 22:31:03.903698  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:03.903704  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:03.903767  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:03.927883  546345 cri.go:89] found id: ""
	I1202 22:31:03.927904  546345 logs.go:282] 0 containers: []
	W1202 22:31:03.927913  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:03.927920  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:03.927982  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:03.952301  546345 cri.go:89] found id: ""
	I1202 22:31:03.952324  546345 logs.go:282] 0 containers: []
	W1202 22:31:03.952332  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:03.952339  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:03.952397  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:03.977367  546345 cri.go:89] found id: ""
	I1202 22:31:03.977390  546345 logs.go:282] 0 containers: []
	W1202 22:31:03.977399  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:03.977406  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:03.977465  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:04.003308  546345 cri.go:89] found id: ""
	I1202 22:31:04.003336  546345 logs.go:282] 0 containers: []
	W1202 22:31:04.003347  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:04.003361  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:04.003438  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:04.030694  546345 cri.go:89] found id: ""
	I1202 22:31:04.030718  546345 logs.go:282] 0 containers: []
	W1202 22:31:04.030731  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:04.030738  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:04.030812  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:04.056404  546345 cri.go:89] found id: ""
	I1202 22:31:04.056430  546345 logs.go:282] 0 containers: []
	W1202 22:31:04.056439  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:04.056446  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:04.056506  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:04.081740  546345 cri.go:89] found id: ""
	I1202 22:31:04.081762  546345 logs.go:282] 0 containers: []
	W1202 22:31:04.081770  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:04.081779  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:04.081792  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:04.109259  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:04.109285  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:04.165104  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:04.165137  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:04.181694  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:04.181725  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:04.241465  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:04.234394    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.234783    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.236525    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.236860    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.238270    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:04.234394    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.234783    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.236525    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.236860    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.238270    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:04.241493  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:04.241506  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:06.766561  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:06.777372  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:06.777445  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:06.807209  546345 cri.go:89] found id: ""
	I1202 22:31:06.807235  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.807244  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:06.807251  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:06.807356  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:06.833401  546345 cri.go:89] found id: ""
	I1202 22:31:06.833424  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.833433  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:06.833439  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:06.833497  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:06.858407  546345 cri.go:89] found id: ""
	I1202 22:31:06.858434  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.858442  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:06.858449  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:06.858509  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:06.884341  546345 cri.go:89] found id: ""
	I1202 22:31:06.884367  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.884375  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:06.884382  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:06.884445  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:06.911764  546345 cri.go:89] found id: ""
	I1202 22:31:06.911787  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.911796  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:06.911802  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:06.911861  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:06.940179  546345 cri.go:89] found id: ""
	I1202 22:31:06.940204  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.940217  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:06.940225  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:06.940289  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:06.965277  546345 cri.go:89] found id: ""
	I1202 22:31:06.965304  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.965313  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:06.965320  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:06.965390  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:06.991270  546345 cri.go:89] found id: ""
	I1202 22:31:06.991294  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.991303  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:06.991313  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:06.991326  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:07.060741  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:07.051593    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.052288    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.053853    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.054275    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.057516    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:07.051593    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.052288    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.053853    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.054275    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.057516    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:07.060762  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:07.060778  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:07.085921  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:07.085970  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:07.113268  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:07.113298  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:07.169055  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:07.169092  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:09.686487  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:09.697143  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:09.697217  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:09.722726  546345 cri.go:89] found id: ""
	I1202 22:31:09.722749  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.722760  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:09.722767  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:09.722826  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:09.748226  546345 cri.go:89] found id: ""
	I1202 22:31:09.748251  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.748260  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:09.748267  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:09.748327  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:09.774010  546345 cri.go:89] found id: ""
	I1202 22:31:09.774035  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.774043  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:09.774050  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:09.774109  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:09.800227  546345 cri.go:89] found id: ""
	I1202 22:31:09.800250  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.800259  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:09.800266  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:09.800328  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:09.828744  546345 cri.go:89] found id: ""
	I1202 22:31:09.828768  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.828777  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:09.828784  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:09.828843  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:09.853554  546345 cri.go:89] found id: ""
	I1202 22:31:09.853577  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.853586  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:09.853593  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:09.853672  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:09.879248  546345 cri.go:89] found id: ""
	I1202 22:31:09.879271  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.879279  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:09.879285  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:09.879350  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:09.908338  546345 cri.go:89] found id: ""
	I1202 22:31:09.908364  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.908373  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:09.908383  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:09.908394  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:09.936944  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:09.936974  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:09.993598  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:09.993644  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:10.010732  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:10.010766  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:10.084652  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:10.073833    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.074265    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.077620    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.078616    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.080339    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:10.073833    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.074265    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.077620    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.078616    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.080339    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:10.084677  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:10.084692  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:12.613817  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:12.624680  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:12.624765  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:12.651202  546345 cri.go:89] found id: ""
	I1202 22:31:12.651227  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.651236  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:12.651243  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:12.651301  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:12.676106  546345 cri.go:89] found id: ""
	I1202 22:31:12.676130  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.676138  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:12.676145  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:12.676202  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:12.700680  546345 cri.go:89] found id: ""
	I1202 22:31:12.700706  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.700716  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:12.700723  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:12.700787  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:12.726023  546345 cri.go:89] found id: ""
	I1202 22:31:12.726049  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.726059  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:12.726066  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:12.726126  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:12.750927  546345 cri.go:89] found id: ""
	I1202 22:31:12.750951  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.750959  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:12.750966  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:12.751026  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:12.777535  546345 cri.go:89] found id: ""
	I1202 22:31:12.777562  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.777570  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:12.777577  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:12.777634  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:12.801546  546345 cri.go:89] found id: ""
	I1202 22:31:12.801572  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.801581  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:12.801588  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:12.801646  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:12.829909  546345 cri.go:89] found id: ""
	I1202 22:31:12.829932  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.829941  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:12.829950  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:12.829961  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:12.859869  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:12.859896  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:12.914732  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:12.914767  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:12.930844  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:12.930875  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:12.995842  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:12.988692    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.989211    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.990739    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.991189    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.992650    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:12.988692    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.989211    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.990739    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.991189    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.992650    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
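
Between log dumps the harness sweeps every expected control-plane component name through the CRI, and each `found id: ""` line above means not even an exited container exists for that component. The same sweep can be reproduced by hand (a sketch, assuming crictl on the node is configured against the containerd socket, as it is in this image):

    # List any container, running or exited, for each expected component.
    # Empty output everywhere means the control plane was never created.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      printf '%-24s %s\n' "$c" "$(sudo crictl ps -a --quiet --name="$c")"
    done
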
	I1202 22:31:12.995865  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:12.995879  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:15.522875  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:15.533513  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:15.533591  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:15.569400  546345 cri.go:89] found id: ""
	I1202 22:31:15.569424  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.569433  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:15.569439  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:15.569496  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:15.628130  546345 cri.go:89] found id: ""
	I1202 22:31:15.628152  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.628161  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:15.628167  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:15.628228  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:15.653054  546345 cri.go:89] found id: ""
	I1202 22:31:15.653076  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.653085  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:15.653092  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:15.653149  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:15.678257  546345 cri.go:89] found id: ""
	I1202 22:31:15.678281  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.678290  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:15.678296  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:15.678353  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:15.702830  546345 cri.go:89] found id: ""
	I1202 22:31:15.702856  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.702864  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:15.702871  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:15.702936  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:15.728236  546345 cri.go:89] found id: ""
	I1202 22:31:15.728261  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.728270  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:15.728276  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:15.728336  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:15.753646  546345 cri.go:89] found id: ""
	I1202 22:31:15.753694  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.753703  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:15.753710  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:15.753772  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:15.778069  546345 cri.go:89] found id: ""
	I1202 22:31:15.778092  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.778101  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:15.778110  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:15.778121  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:15.834182  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:15.834217  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:15.850533  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:15.850572  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:15.911589  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:15.904443    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.904979    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.906513    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.906995    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.908448    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:15.904443    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.904979    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.906513    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.906995    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.908448    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
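
The "container status" gatherer uses a shell fallback chain so the same command works across runtimes: resolve crictl from PATH when installed, otherwise try the bare name, and if crictl itself fails, fall back to the Docker CLI. Annotated (this is the command from the log; the cross-runtime behavior described in the comments is an interpretation, not something this run exercises):

    # `which crictl || echo crictl` -> full path when installed, bare name
    # otherwise; the trailing `|| sudo docker ps -a` catches setups where
    # crictl is absent or errors out (e.g. a docker runtime without cri-tools).
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
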
	I1202 22:31:15.911609  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:15.911621  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:15.936945  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:15.936977  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:18.470112  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:18.480648  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:18.480727  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:18.508083  546345 cri.go:89] found id: ""
	I1202 22:31:18.508109  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.508117  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:18.508124  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:18.508252  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:18.533123  546345 cri.go:89] found id: ""
	I1202 22:31:18.533149  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.533164  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:18.533172  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:18.533245  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:18.586767  546345 cri.go:89] found id: ""
	I1202 22:31:18.586791  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.586800  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:18.586806  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:18.586866  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:18.626205  546345 cri.go:89] found id: ""
	I1202 22:31:18.626227  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.626236  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:18.626242  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:18.626299  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:18.653977  546345 cri.go:89] found id: ""
	I1202 22:31:18.653998  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.654007  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:18.654013  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:18.654074  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:18.679194  546345 cri.go:89] found id: ""
	I1202 22:31:18.679227  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.679237  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:18.679244  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:18.679305  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:18.704215  546345 cri.go:89] found id: ""
	I1202 22:31:18.704280  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.704305  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:18.704326  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:18.704411  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:18.729467  546345 cri.go:89] found id: ""
	I1202 22:31:18.729536  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.729560  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:18.729583  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:18.729624  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:18.745333  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:18.745406  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:18.810842  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:18.802788    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.803411    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.805114    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.805719    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.807226    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:18.802788    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.803411    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.805114    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.805719    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.807226    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:18.810886  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:18.810899  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:18.836014  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:18.836050  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:18.864189  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:18.864230  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:21.420147  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:21.430404  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:21.430516  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:21.454558  546345 cri.go:89] found id: ""
	I1202 22:31:21.454583  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.454592  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:21.454599  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:21.454658  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:21.478328  546345 cri.go:89] found id: ""
	I1202 22:31:21.478360  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.478369  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:21.478377  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:21.478445  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:21.502704  546345 cri.go:89] found id: ""
	I1202 22:31:21.502729  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.502737  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:21.502744  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:21.502805  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:21.528175  546345 cri.go:89] found id: ""
	I1202 22:31:21.528201  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.528209  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:21.528216  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:21.528278  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:21.616594  546345 cri.go:89] found id: ""
	I1202 22:31:21.616622  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.616632  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:21.616638  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:21.616697  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:21.645131  546345 cri.go:89] found id: ""
	I1202 22:31:21.645160  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.645168  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:21.645178  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:21.645238  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:21.671523  546345 cri.go:89] found id: ""
	I1202 22:31:21.671545  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.671553  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:21.671564  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:21.671624  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:21.695173  546345 cri.go:89] found id: ""
	I1202 22:31:21.695195  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.695203  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:21.695212  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:21.695222  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:21.719757  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:21.719792  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:21.749635  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:21.749681  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:21.808026  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:21.808062  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:21.823780  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:21.823809  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:21.884457  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:21.877494    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.878140    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.879593    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.879994    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.881385    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:21.877494    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.878140    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.879593    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.879994    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.881385    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
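
The repeating `pgrep -xnf kube-apiserver.*minikube.*` line at the top of each iteration is the apiserver readiness probe: the harness asks whether any process whose full command line matches the pattern exists, and only falls through to another round of log gathering when it does not. A hand-rolled equivalent of the wait loop (the roughly three-second interval matches the timestamps above; the attempt cap is an assumption):

    # Poll until a kube-apiserver launched by minikube shows up.
    for i in $(seq 1 100); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 3
    done
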
	I1202 22:31:24.384744  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:24.394799  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:24.394871  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:24.420708  546345 cri.go:89] found id: ""
	I1202 22:31:24.420731  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.420740  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:24.420747  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:24.420804  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:24.444913  546345 cri.go:89] found id: ""
	I1202 22:31:24.444938  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.444947  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:24.444953  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:24.445011  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:24.468474  546345 cri.go:89] found id: ""
	I1202 22:31:24.468562  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.468586  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:24.468619  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:24.468712  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:24.492364  546345 cri.go:89] found id: ""
	I1202 22:31:24.492435  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.492459  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:24.492479  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:24.492570  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:24.517358  546345 cri.go:89] found id: ""
	I1202 22:31:24.517434  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.517473  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:24.517498  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:24.517589  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:24.556717  546345 cri.go:89] found id: ""
	I1202 22:31:24.556800  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.556829  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:24.556870  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:24.556990  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:24.641499  546345 cri.go:89] found id: ""
	I1202 22:31:24.641533  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.641542  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:24.641549  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:24.641704  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:24.665998  546345 cri.go:89] found id: ""
	I1202 22:31:24.666024  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.666032  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:24.666041  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:24.666053  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:24.720801  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:24.720835  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:24.736228  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:24.736255  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:24.802911  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:24.795538    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.796038    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.797728    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.798172    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.799727    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:24.795538    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.796038    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.797728    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.798172    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.799727    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
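
Besides the failing describe-nodes call, every iteration collects the same node-side sources, with the flags visible in the log: the last 400 journal lines for the kubelet and containerd units, plus warning-and-above kernel messages. Grouped here for reference (commands verbatim from the log):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
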
	I1202 22:31:24.802934  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:24.802948  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:24.826675  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:24.826710  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:27.352424  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:27.363728  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:27.363800  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:27.388330  546345 cri.go:89] found id: ""
	I1202 22:31:27.388356  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.388365  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:27.388372  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:27.388430  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:27.412561  546345 cri.go:89] found id: ""
	I1202 22:31:27.412589  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.412598  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:27.412605  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:27.412664  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:27.436953  546345 cri.go:89] found id: ""
	I1202 22:31:27.436982  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.436991  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:27.436997  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:27.437057  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:27.461746  546345 cri.go:89] found id: ""
	I1202 22:31:27.461775  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.461783  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:27.461790  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:27.461847  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:27.489561  546345 cri.go:89] found id: ""
	I1202 22:31:27.489598  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.489607  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:27.489614  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:27.489708  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:27.517814  546345 cri.go:89] found id: ""
	I1202 22:31:27.517835  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.517844  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:27.517851  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:27.517909  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:27.545611  546345 cri.go:89] found id: ""
	I1202 22:31:27.545711  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.545734  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:27.545754  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:27.545839  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:27.603444  546345 cri.go:89] found id: ""
	I1202 22:31:27.603466  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.603474  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:27.603484  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:27.603497  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:27.674112  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:27.674149  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:27.690096  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:27.690128  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:27.752579  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:27.743812    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.744655    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.746641    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.747382    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.749154    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:27.743812    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.744655    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.746641    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.747382    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.749154    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:27.752604  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:27.752617  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:27.777612  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:27.777647  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:30.305694  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:30.316225  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:30.316348  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:30.339916  546345 cri.go:89] found id: ""
	I1202 22:31:30.339950  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.339959  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:30.339974  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:30.340052  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:30.369549  546345 cri.go:89] found id: ""
	I1202 22:31:30.369575  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.369584  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:30.369590  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:30.369677  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:30.394634  546345 cri.go:89] found id: ""
	I1202 22:31:30.394711  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.394734  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:30.394749  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:30.394830  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:30.419244  546345 cri.go:89] found id: ""
	I1202 22:31:30.419271  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.419279  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:30.419286  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:30.419344  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:30.447382  546345 cri.go:89] found id: ""
	I1202 22:31:30.447414  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.447423  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:30.447430  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:30.447530  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:30.471131  546345 cri.go:89] found id: ""
	I1202 22:31:30.471155  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.471163  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:30.471170  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:30.471236  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:30.496091  546345 cri.go:89] found id: ""
	I1202 22:31:30.496116  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.496125  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:30.496132  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:30.496209  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:30.520739  546345 cri.go:89] found id: ""
	I1202 22:31:30.520767  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.520775  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:30.520785  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:30.520796  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:30.549966  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:30.550055  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:30.602152  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:30.602176  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:30.668135  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:30.668172  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:30.683585  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:30.683653  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:30.747838  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:30.740098    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.740709    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.742156    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.742602    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.744015    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:30.740098    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.740709    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.742156    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.742602    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.744015    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:33.249502  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:33.259480  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:33.259551  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:33.282766  546345 cri.go:89] found id: ""
	I1202 22:31:33.282791  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.282799  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:33.282806  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:33.282866  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:33.308495  546345 cri.go:89] found id: ""
	I1202 22:31:33.308518  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.308533  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:33.308540  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:33.308597  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:33.331979  546345 cri.go:89] found id: ""
	I1202 22:31:33.332013  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.332023  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:33.332030  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:33.332100  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:33.356278  546345 cri.go:89] found id: ""
	I1202 22:31:33.356304  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.356313  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:33.356319  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:33.356378  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:33.384857  546345 cri.go:89] found id: ""
	I1202 22:31:33.384885  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.384893  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:33.384900  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:33.384959  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:33.409699  546345 cri.go:89] found id: ""
	I1202 22:31:33.409727  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.409735  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:33.409742  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:33.409818  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:33.433952  546345 cri.go:89] found id: ""
	I1202 22:31:33.433976  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.433984  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:33.433991  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:33.434048  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:33.457225  546345 cri.go:89] found id: ""
	I1202 22:31:33.457250  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.457265  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:33.457274  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:33.457286  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:33.481072  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:33.481106  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:33.513367  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:33.513402  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:33.575454  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:33.575500  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:33.611865  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:33.611895  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:33.687837  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:33.680605    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:33.681164    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:33.682734    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:33.683164    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:33.684656    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
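Context for the probe loop above: each cri.go:54 line scopes a query to one expected control-plane component, the matching ssh_runner.go line runs "crictl ps -a --quiet --name=<component>" over SSH, and empty output is logged as "0 containers" / "No container was found". Below is a minimal, hypothetical Go sketch of that probe, not minikube's actual implementation; it shells out to crictl exactly as the log lines do, and the component list is an illustrative subset of the eight names checked in each cycle.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listCRIContainers shells out to crictl the way the log lines do and
    // returns the matching container IDs (crictl --quiet prints one ID per line).
    func listCRIContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listCRIContainers(name)
            if err != nil {
                fmt.Printf("probe for %q failed: %v\n", name, err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", name)
                continue
            }
            fmt.Printf("%q -> %v\n", name, ids)
        }
    }

On the node in the state logged above, every one of these probes would come back empty, which is exactly what each retry cycle records.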
	I1202 22:31:36.188106  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:36.198524  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:36.198595  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:36.227262  546345 cri.go:89] found id: ""
	I1202 22:31:36.227286  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.227294  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:36.227301  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:36.227364  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:36.251229  546345 cri.go:89] found id: ""
	I1202 22:31:36.251254  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.251262  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:36.251269  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:36.251328  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:36.280094  546345 cri.go:89] found id: ""
	I1202 22:31:36.280118  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.280128  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:36.280135  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:36.280192  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:36.303557  546345 cri.go:89] found id: ""
	I1202 22:31:36.303589  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.303598  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:36.303606  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:36.303680  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:36.328036  546345 cri.go:89] found id: ""
	I1202 22:31:36.328099  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.328110  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:36.328117  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:36.328210  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:36.352844  546345 cri.go:89] found id: ""
	I1202 22:31:36.352919  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.352942  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:36.352963  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:36.353076  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:36.377059  546345 cri.go:89] found id: ""
	I1202 22:31:36.377123  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.377148  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:36.377169  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:36.377299  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:36.406912  546345 cri.go:89] found id: ""
	I1202 22:31:36.406939  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.406947  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:36.406957  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:36.406969  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:36.462620  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:36.462655  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:36.478602  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:36.478633  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:36.553409  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:36.534656    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:36.535346    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:36.536921    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:36.537223    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:36.539840    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
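The repeated connection-refused errors on [::1]:8443 in the stderr above mean nothing is listening on the apiserver port, which is consistent with every kube-apiserver probe returning no containers: kubectl cannot even fetch the API group list. A stdlib-only connectivity check, hypothetical as a helper but dialing the exact endpoint from the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the endpoint the failing kubectl calls use; "connection refused"
        // here reproduces the memcache.go errors in the log.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }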
	I1202 22:31:36.553440  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:36.553453  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:36.605527  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:36.605567  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
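The "container status" step just above uses a shell fallback: "which crictl || echo crictl" keeps the command intact when crictl is on PATH, and the trailing "|| sudo docker ps -a" covers Docker-only hosts. A hedged Go sketch that runs the same compound command via bash (the command string is verbatim from the log; only the wrapper is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same compound command as the log line; bash resolves the
        // crictl-or-docker fallback at run time.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("container status collection failed:", err)
        }
    }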
	I1202 22:31:39.147765  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:39.158330  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:39.158399  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:39.184185  546345 cri.go:89] found id: ""
	I1202 22:31:39.184211  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.184220  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:39.184227  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:39.184286  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:39.211366  546345 cri.go:89] found id: ""
	I1202 22:31:39.211390  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.211399  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:39.211405  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:39.211465  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:39.239810  546345 cri.go:89] found id: ""
	I1202 22:31:39.239836  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.239846  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:39.239853  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:39.239914  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:39.264259  546345 cri.go:89] found id: ""
	I1202 22:31:39.264285  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.264294  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:39.264300  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:39.264357  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:39.288356  546345 cri.go:89] found id: ""
	I1202 22:31:39.288384  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.288394  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:39.288400  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:39.288459  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:39.312721  546345 cri.go:89] found id: ""
	I1202 22:31:39.312745  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.312754  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:39.312760  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:39.312817  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:39.337724  546345 cri.go:89] found id: ""
	I1202 22:31:39.337748  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.337756  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:39.337762  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:39.337821  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:39.362280  546345 cri.go:89] found id: ""
	I1202 22:31:39.362303  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.362311  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:39.362320  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:39.362332  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:39.389401  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:39.389425  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:39.449427  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:39.449471  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:39.464867  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:39.464897  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:39.527654  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:39.520193    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:39.521056    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:39.522506    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:39.522885    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:39.524329    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:39.527675  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:39.527691  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
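Note the cadence: each cycle opens with "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every three seconds (22:31:36, :39, :42, ...), so the tool is waiting for an apiserver process to appear between log-gathering rounds. A minimal polling sketch under that assumption; the interval and pgrep pattern come from the log, while the helper and deadline are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // apiserverRunning mirrors the log's pgrep probe; pgrep exits non-zero
    // when no process matches, so a nil error means "found".
    func apiserverRunning() bool {
        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(3 * time.Second) // matches the ~3s cycle in the log
        }
        fmt.Println("gave up waiting for kube-apiserver")
    }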
	I1202 22:31:42.058126  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:42.070220  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:42.070305  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:42.113161  546345 cri.go:89] found id: ""
	I1202 22:31:42.113187  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.113197  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:42.113205  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:42.113279  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:42.151146  546345 cri.go:89] found id: ""
	I1202 22:31:42.151178  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.151188  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:42.151195  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:42.151267  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:42.187923  546345 cri.go:89] found id: ""
	I1202 22:31:42.187951  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.187960  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:42.187968  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:42.188040  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:42.222980  546345 cri.go:89] found id: ""
	I1202 22:31:42.223003  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.223012  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:42.223020  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:42.223088  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:42.271018  546345 cri.go:89] found id: ""
	I1202 22:31:42.271046  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.271056  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:42.271064  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:42.271136  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:42.302817  546345 cri.go:89] found id: ""
	I1202 22:31:42.302893  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.302913  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:42.302929  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:42.303020  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:42.333498  546345 cri.go:89] found id: ""
	I1202 22:31:42.333526  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.333535  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:42.333543  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:42.333630  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:42.363457  546345 cri.go:89] found id: ""
	I1202 22:31:42.363485  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.363495  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:42.363505  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:42.363518  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:42.421844  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:42.421883  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:42.439113  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:42.439145  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:42.506768  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:42.497962    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:42.498854    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:42.500599    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:42.501068    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:42.502782    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:42.506791  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:42.506803  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:42.531455  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:42.531491  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:45.076035  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:45.089323  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:45.089414  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:45.121410  546345 cri.go:89] found id: ""
	I1202 22:31:45.121436  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.121445  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:45.121454  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:45.121523  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:45.158421  546345 cri.go:89] found id: ""
	I1202 22:31:45.158452  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.158461  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:45.158840  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:45.158933  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:45.226744  546345 cri.go:89] found id: ""
	I1202 22:31:45.226769  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.226778  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:45.226785  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:45.226855  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:45.277464  546345 cri.go:89] found id: ""
	I1202 22:31:45.277540  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.277560  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:45.277573  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:45.277920  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:45.321564  546345 cri.go:89] found id: ""
	I1202 22:31:45.321591  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.321600  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:45.321607  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:45.321695  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:45.349203  546345 cri.go:89] found id: ""
	I1202 22:31:45.349228  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.349236  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:45.349243  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:45.349302  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:45.377967  546345 cri.go:89] found id: ""
	I1202 22:31:45.377993  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.378001  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:45.378009  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:45.378068  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:45.404655  546345 cri.go:89] found id: ""
	I1202 22:31:45.404680  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.404689  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:45.404697  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:45.404709  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:45.459390  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:45.459424  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:45.474938  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:45.474964  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:45.551857  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:45.534337    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:45.534849    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:45.536311    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:45.536755    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:45.538167    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:45.551880  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:45.551893  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:45.599545  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:45.599577  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
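Every cri.go probe above is scoped to /run/containerd/runc/k8s.io, containerd's runc state root for the k8s.io namespace. As a crude cross-check, that directory should be empty (or absent) while every probe returns no IDs; a hypothetical sketch, assuming the default containerd layout shown in the log:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        const runcRoot = "/run/containerd/runc/k8s.io" // path taken from the cri.go log lines
        entries, err := os.ReadDir(runcRoot)
        if err != nil {
            // A missing directory is also consistent with "0 containers".
            fmt.Println("cannot read runc root:", err)
            return
        }
        if len(entries) == 0 {
            fmt.Println("no container state under", runcRoot)
            return
        }
        for _, e := range entries {
            fmt.Println("container state dir:", e.Name())
        }
    }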
	I1202 22:31:48.142376  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:48.152835  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:48.152910  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:48.176887  546345 cri.go:89] found id: ""
	I1202 22:31:48.176913  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.176921  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:48.176928  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:48.176992  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:48.199841  546345 cri.go:89] found id: ""
	I1202 22:31:48.199865  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.199873  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:48.199879  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:48.199937  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:48.223323  546345 cri.go:89] found id: ""
	I1202 22:31:48.223346  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.223354  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:48.223361  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:48.223419  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:48.246053  546345 cri.go:89] found id: ""
	I1202 22:31:48.246079  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.246088  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:48.246095  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:48.246152  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:48.269713  546345 cri.go:89] found id: ""
	I1202 22:31:48.269739  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.269748  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:48.269755  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:48.269811  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:48.295336  546345 cri.go:89] found id: ""
	I1202 22:31:48.295359  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.295368  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:48.295374  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:48.295435  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:48.318964  546345 cri.go:89] found id: ""
	I1202 22:31:48.318989  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.319001  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:48.319009  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:48.319114  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:48.342776  546345 cri.go:89] found id: ""
	I1202 22:31:48.342803  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.342812  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:48.342821  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:48.342834  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:48.366473  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:48.366507  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:48.397880  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:48.397907  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:48.453030  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:48.453066  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:48.468428  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:48.468455  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:48.530252  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:48.521915    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.522761    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.524653    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.525302    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.526795    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
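The dmesg step in each gathering round filters to warning-and-worse kernel messages and keeps only the last 400 lines. A sketch running the same pipeline from Go; the command string is copied verbatim from the log, and only the wrapper is hypothetical:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Severity filter and tail length are verbatim from the log's dmesg step.
        cmd := "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("dmesg collection failed:", err)
        }
    }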
	I1202 22:31:51.030539  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:51.041072  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:51.041139  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:51.064958  546345 cri.go:89] found id: ""
	I1202 22:31:51.064986  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.064994  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:51.065004  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:51.065074  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:51.090247  546345 cri.go:89] found id: ""
	I1202 22:31:51.090275  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.090284  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:51.090290  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:51.090356  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:51.117187  546345 cri.go:89] found id: ""
	I1202 22:31:51.117224  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.117235  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:51.117242  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:51.117326  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:51.143456  546345 cri.go:89] found id: ""
	I1202 22:31:51.143483  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.143492  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:51.143499  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:51.143563  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:51.169463  546345 cri.go:89] found id: ""
	I1202 22:31:51.169542  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.169565  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:51.169587  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:51.169719  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:51.195977  546345 cri.go:89] found id: ""
	I1202 22:31:51.196019  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.196028  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:51.196035  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:51.196105  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:51.221006  546345 cri.go:89] found id: ""
	I1202 22:31:51.221030  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.221045  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:51.221051  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:51.221119  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:51.245434  546345 cri.go:89] found id: ""
	I1202 22:31:51.245457  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.245466  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:51.245475  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:51.245486  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:51.273171  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:51.273198  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:51.328523  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:51.328562  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:51.344211  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:51.344238  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:51.405812  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:51.397619    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.398419    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.399942    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.400527    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.402107    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:51.405843  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:51.405859  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:53.930346  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:53.940572  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:53.940646  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:53.968500  546345 cri.go:89] found id: ""
	I1202 22:31:53.968531  546345 logs.go:282] 0 containers: []
	W1202 22:31:53.968540  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:53.968547  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:53.968605  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:53.993271  546345 cri.go:89] found id: ""
	I1202 22:31:53.993298  546345 logs.go:282] 0 containers: []
	W1202 22:31:53.993306  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:53.993314  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:53.993372  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:54.020928  546345 cri.go:89] found id: ""
	I1202 22:31:54.020956  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.020965  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:54.020973  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:54.021039  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:54.047236  546345 cri.go:89] found id: ""
	I1202 22:31:54.047260  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.047269  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:54.047276  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:54.047336  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:54.072186  546345 cri.go:89] found id: ""
	I1202 22:31:54.072219  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.072228  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:54.072235  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:54.072310  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:54.097358  546345 cri.go:89] found id: ""
	I1202 22:31:54.097390  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.097400  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:54.097407  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:54.097484  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:54.122635  546345 cri.go:89] found id: ""
	I1202 22:31:54.122739  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.122765  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:54.122787  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:54.122881  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:54.147140  546345 cri.go:89] found id: ""
	I1202 22:31:54.147205  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.147228  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:54.147244  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:54.147257  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:54.209277  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:54.202024    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.202800    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.204383    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.204707    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.206238    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:54.209298  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:54.209312  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:54.233525  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:54.233564  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:54.267595  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:54.267623  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:54.322957  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:54.322991  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
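The kubelet and containerd logs are pulled the same way throughout: the last 400 journal lines per systemd unit. A small, hypothetical helper for that collection step; the units and line count are taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // lastUnitLines returns the last n journal lines for a systemd unit,
    // mirroring the "journalctl -u <unit> -n 400" calls in the log.
    func lastUnitLines(unit string, n int) (string, error) {
        out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).Output()
        return string(out), err
    }

    func main() {
        for _, unit := range []string{"kubelet", "containerd"} {
            text, err := lastUnitLines(unit, 400)
            if err != nil {
                fmt.Printf("journalctl -u %s failed: %v\n", unit, err)
                continue
            }
            fmt.Printf("--- last 400 lines of %s ---\n%s", unit, text)
        }
    }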
	I1202 22:31:56.839135  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:56.854872  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:56.854954  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:56.883302  546345 cri.go:89] found id: ""
	I1202 22:31:56.883327  546345 logs.go:282] 0 containers: []
	W1202 22:31:56.883335  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:56.883342  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:56.883400  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:56.909437  546345 cri.go:89] found id: ""
	I1202 22:31:56.909478  546345 logs.go:282] 0 containers: []
	W1202 22:31:56.909495  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:56.909502  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:56.909574  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:56.935567  546345 cri.go:89] found id: ""
	I1202 22:31:56.935592  546345 logs.go:282] 0 containers: []
	W1202 22:31:56.935600  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:56.935607  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:56.935700  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:56.962296  546345 cri.go:89] found id: ""
	I1202 22:31:56.962322  546345 logs.go:282] 0 containers: []
	W1202 22:31:56.962339  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:56.962352  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:56.962417  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:56.987308  546345 cri.go:89] found id: ""
	I1202 22:31:56.987333  546345 logs.go:282] 0 containers: []
	W1202 22:31:56.987341  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:56.987348  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:56.987409  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:57.017409  546345 cri.go:89] found id: ""
	I1202 22:31:57.017436  546345 logs.go:282] 0 containers: []
	W1202 22:31:57.017444  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:57.017451  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:57.017519  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:57.043570  546345 cri.go:89] found id: ""
	I1202 22:31:57.043593  546345 logs.go:282] 0 containers: []
	W1202 22:31:57.043601  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:57.043607  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:57.043670  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:57.068973  546345 cri.go:89] found id: ""
	I1202 22:31:57.069005  546345 logs.go:282] 0 containers: []
	W1202 22:31:57.069014  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
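
The listing block above sweeps a fixed set of control-plane and addon names and asks crictl for matching container IDs; an empty answer for every name is what produces the string of "0 containers" warnings. A sketch of that sweep (component list taken from the log; invoking crictl locally is an assumption):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Same query as the log: all containers (any state) whose name matches.
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %v\n", name, ids)
		}
	}
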
	I1202 22:31:57.069023  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:57.069034  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:57.093239  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:57.093275  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:57.120751  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:57.120777  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:57.176173  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:57.176209  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:57.193001  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:57.193035  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:57.259032  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:57.251882    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.252406    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.253992    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.254374    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.255868    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:59.760716  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:59.771290  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:59.771364  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:59.819477  546345 cri.go:89] found id: ""
	I1202 22:31:59.819507  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.819521  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:59.819528  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:59.819609  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:59.879132  546345 cri.go:89] found id: ""
	I1202 22:31:59.879159  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.879168  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:59.879175  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:59.879235  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:59.909985  546345 cri.go:89] found id: ""
	I1202 22:31:59.910011  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.910020  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:59.910027  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:59.910083  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:59.934326  546345 cri.go:89] found id: ""
	I1202 22:31:59.934350  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.934359  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:59.934366  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:59.934424  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:59.963200  546345 cri.go:89] found id: ""
	I1202 22:31:59.963224  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.963233  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:59.963240  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:59.963327  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:59.989148  546345 cri.go:89] found id: ""
	I1202 22:31:59.989180  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.989190  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:59.989196  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:59.989302  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:00.074954  546345 cri.go:89] found id: ""
	I1202 22:32:00.075036  546345 logs.go:282] 0 containers: []
	W1202 22:32:00.075063  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:00.075085  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:00.075215  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:00.226233  546345 cri.go:89] found id: ""
	I1202 22:32:00.226259  546345 logs.go:282] 0 containers: []
	W1202 22:32:00.226269  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:00.226279  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:00.226293  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:00.336324  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:00.336441  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:00.371299  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:00.371905  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:00.484267  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:00.475120    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.475618    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.477981    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.479118    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.480037    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:32:00.484297  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:00.484311  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:00.512091  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:00.512128  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
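
The timestamps show the whole diagnostic pass repeating roughly every three seconds: probe for a kube-apiserver process with pgrep and, when that fails, re-run the container sweep and log gathering. A minimal sketch of that wait loop (the ~3-second interval and the 30-second budget are assumptions inferred from the timestamps, not harness constants):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(30 * time.Second) // assumed overall budget
		for time.Now().Before(deadline) {
			// Same probe as the log: is a kube-apiserver process for this profile running?
			err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
			if err == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(3 * time.Second) // cadence inferred from the log timestamps
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}

pgrep exits non-zero when nothing matches, so a nil error here is the "process exists" signal.
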
	I1202 22:32:03.068479  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:03.078801  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:03.078893  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:03.103732  546345 cri.go:89] found id: ""
	I1202 22:32:03.103758  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.103766  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:03.103773  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:03.103832  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:03.128397  546345 cri.go:89] found id: ""
	I1202 22:32:03.128426  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.128435  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:03.128441  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:03.128501  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:03.153803  546345 cri.go:89] found id: ""
	I1202 22:32:03.153877  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.153899  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:03.153913  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:03.153988  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:03.181014  546345 cri.go:89] found id: ""
	I1202 22:32:03.181038  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.181047  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:03.181053  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:03.181152  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:03.210807  546345 cri.go:89] found id: ""
	I1202 22:32:03.210834  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.210843  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:03.210850  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:03.210911  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:03.239226  546345 cri.go:89] found id: ""
	I1202 22:32:03.239251  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.239260  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:03.239267  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:03.239326  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:03.263944  546345 cri.go:89] found id: ""
	I1202 22:32:03.263969  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.263978  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:03.263984  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:03.264044  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:03.287558  546345 cri.go:89] found id: ""
	I1202 22:32:03.287583  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.287592  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:03.287601  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:03.287612  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:03.311743  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:03.311776  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:03.343056  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:03.343083  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:03.397595  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:03.397629  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:03.413119  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:03.413155  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:03.475280  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:03.468130    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.468858    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.470478    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.470758    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.472212    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:32:05.975590  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:05.985554  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:05.985622  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:06.019132  546345 cri.go:89] found id: ""
	I1202 22:32:06.019157  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.019166  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:06.019173  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:06.019241  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:06.044254  546345 cri.go:89] found id: ""
	I1202 22:32:06.044277  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.044286  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:06.044293  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:06.044357  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:06.073518  546345 cri.go:89] found id: ""
	I1202 22:32:06.073541  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.073550  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:06.073556  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:06.073619  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:06.103333  546345 cri.go:89] found id: ""
	I1202 22:32:06.103400  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.103431  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:06.103450  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:06.103539  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:06.129000  546345 cri.go:89] found id: ""
	I1202 22:32:06.129036  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.129051  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:06.129058  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:06.129128  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:06.155243  546345 cri.go:89] found id: ""
	I1202 22:32:06.155266  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.155274  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:06.155281  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:06.155341  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:06.183834  546345 cri.go:89] found id: ""
	I1202 22:32:06.183900  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.183923  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:06.183942  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:06.184033  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:06.208508  546345 cri.go:89] found id: ""
	I1202 22:32:06.208546  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.208556  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:06.208566  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:06.208578  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:06.265928  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:06.265966  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:06.281782  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:06.281811  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:06.341568  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:06.333544    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.334347    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.335275    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.336735    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.337307    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:32:06.341591  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:06.341603  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:06.366403  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:06.366435  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:08.899765  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:08.910234  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:08.910306  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:08.940951  546345 cri.go:89] found id: ""
	I1202 22:32:08.940979  546345 logs.go:282] 0 containers: []
	W1202 22:32:08.940989  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:08.940995  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:08.941054  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:08.966172  546345 cri.go:89] found id: ""
	I1202 22:32:08.966198  546345 logs.go:282] 0 containers: []
	W1202 22:32:08.966207  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:08.966214  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:08.966274  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:08.990534  546345 cri.go:89] found id: ""
	I1202 22:32:08.990561  546345 logs.go:282] 0 containers: []
	W1202 22:32:08.990569  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:08.990576  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:08.990633  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:09.016942  546345 cri.go:89] found id: ""
	I1202 22:32:09.016970  546345 logs.go:282] 0 containers: []
	W1202 22:32:09.016979  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:09.016986  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:09.017052  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:09.040852  546345 cri.go:89] found id: ""
	I1202 22:32:09.040893  546345 logs.go:282] 0 containers: []
	W1202 22:32:09.040902  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:09.040909  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:09.040978  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:09.064884  546345 cri.go:89] found id: ""
	I1202 22:32:09.064958  546345 logs.go:282] 0 containers: []
	W1202 22:32:09.064986  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:09.065005  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:09.065114  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:09.088807  546345 cri.go:89] found id: ""
	I1202 22:32:09.088878  546345 logs.go:282] 0 containers: []
	W1202 22:32:09.088903  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:09.088922  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:09.089011  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:09.115024  546345 cri.go:89] found id: ""
	I1202 22:32:09.115051  546345 logs.go:282] 0 containers: []
	W1202 22:32:09.115060  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:09.115069  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:09.115080  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:09.138651  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:09.138687  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:09.165425  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:09.165449  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:09.222720  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:09.222752  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:09.238413  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:09.238441  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:09.299159  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:09.292446    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.292889    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.294367    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.294689    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.296107    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:32:11.799390  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:11.813803  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:11.813890  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:11.853262  546345 cri.go:89] found id: ""
	I1202 22:32:11.853298  546345 logs.go:282] 0 containers: []
	W1202 22:32:11.853311  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:11.853318  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:11.853394  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:11.898451  546345 cri.go:89] found id: ""
	I1202 22:32:11.898474  546345 logs.go:282] 0 containers: []
	W1202 22:32:11.898482  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:11.898489  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:11.898549  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:11.926743  546345 cri.go:89] found id: ""
	I1202 22:32:11.926817  546345 logs.go:282] 0 containers: []
	W1202 22:32:11.926840  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:11.926860  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:11.926980  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:11.950985  546345 cri.go:89] found id: ""
	I1202 22:32:11.951011  546345 logs.go:282] 0 containers: []
	W1202 22:32:11.951019  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:11.951027  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:11.951106  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:11.975373  546345 cri.go:89] found id: ""
	I1202 22:32:11.975399  546345 logs.go:282] 0 containers: []
	W1202 22:32:11.975407  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:11.975414  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:11.975490  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:12.005482  546345 cri.go:89] found id: ""
	I1202 22:32:12.005511  546345 logs.go:282] 0 containers: []
	W1202 22:32:12.005521  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:12.005529  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:12.005643  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:12.032572  546345 cri.go:89] found id: ""
	I1202 22:32:12.032597  546345 logs.go:282] 0 containers: []
	W1202 22:32:12.032607  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:12.032634  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:12.032733  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:12.059401  546345 cri.go:89] found id: ""
	I1202 22:32:12.059476  546345 logs.go:282] 0 containers: []
	W1202 22:32:12.059492  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:12.059504  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:12.059517  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:12.093142  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:12.093179  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:12.150021  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:12.150054  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:12.165956  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:12.165987  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:12.231857  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:12.225209    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.225713    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.227176    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.227478    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.228901    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:32:12.231929  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:12.231956  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:14.756725  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:14.767263  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:14.767333  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:14.801672  546345 cri.go:89] found id: ""
	I1202 22:32:14.801697  546345 logs.go:282] 0 containers: []
	W1202 22:32:14.801706  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:14.801713  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:14.801770  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:14.851488  546345 cri.go:89] found id: ""
	I1202 22:32:14.851517  546345 logs.go:282] 0 containers: []
	W1202 22:32:14.851532  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:14.851538  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:14.851605  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:14.888023  546345 cri.go:89] found id: ""
	I1202 22:32:14.888048  546345 logs.go:282] 0 containers: []
	W1202 22:32:14.888057  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:14.888064  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:14.888129  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:14.916001  546345 cri.go:89] found id: ""
	I1202 22:32:14.916053  546345 logs.go:282] 0 containers: []
	W1202 22:32:14.916061  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:14.916068  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:14.916135  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:14.942133  546345 cri.go:89] found id: ""
	I1202 22:32:14.942199  546345 logs.go:282] 0 containers: []
	W1202 22:32:14.942222  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:14.942240  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:14.942326  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:14.967663  546345 cri.go:89] found id: ""
	I1202 22:32:14.967694  546345 logs.go:282] 0 containers: []
	W1202 22:32:14.967702  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:14.967710  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:14.967779  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:14.997283  546345 cri.go:89] found id: ""
	I1202 22:32:14.997360  546345 logs.go:282] 0 containers: []
	W1202 22:32:14.997398  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:14.997424  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:14.997514  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:15.028362  546345 cri.go:89] found id: ""
	I1202 22:32:15.028443  546345 logs.go:282] 0 containers: []
	W1202 22:32:15.028481  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:15.028510  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:15.028577  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:15.084989  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:15.085026  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:15.101099  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:15.101135  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:15.163640  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:15.156494    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:15.157156    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:15.158849    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:15.159159    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:15.160627    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:32:15.163661  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:15.163673  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:15.188815  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:15.188850  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:17.720502  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:17.730835  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:17.730906  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:17.754959  546345 cri.go:89] found id: ""
	I1202 22:32:17.754985  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.754994  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:17.755001  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:17.755058  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:17.779124  546345 cri.go:89] found id: ""
	I1202 22:32:17.779145  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.779153  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:17.779159  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:17.779216  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:17.861624  546345 cri.go:89] found id: ""
	I1202 22:32:17.861647  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.861670  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:17.861676  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:17.861733  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:17.891578  546345 cri.go:89] found id: ""
	I1202 22:32:17.891604  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.891612  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:17.891620  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:17.891677  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:17.914983  546345 cri.go:89] found id: ""
	I1202 22:32:17.915005  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.915013  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:17.915019  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:17.915075  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:17.938893  546345 cri.go:89] found id: ""
	I1202 22:32:17.938923  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.938932  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:17.938939  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:17.938997  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:17.964896  546345 cri.go:89] found id: ""
	I1202 22:32:17.964960  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.964983  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:17.964997  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:17.965076  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:17.988828  546345 cri.go:89] found id: ""
	I1202 22:32:17.988863  546345 logs.go:282] 0 containers: []
	W1202 22:32:17.988872  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:17.988882  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:17.988893  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:18.022032  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:18.022059  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:18.077598  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:18.077635  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:18.095143  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:18.095184  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:18.157395  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:18.150066    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:18.150607    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:18.152261    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:18.152789    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:18.154364    8809 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:32:18.157426  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:18.157439  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
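The cycle above is the core of this failure mode: minikube polls for each control-plane component by shelling out to crictl, and empty output is reported as "No container was found matching". Below is a minimal Go sketch of that polling pattern, assuming crictl is on PATH and passwordless sudo; the component names and the roughly three-second interval come from the log, while the rest is illustrative and not minikube's actual logs.go/cri.go implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// The eight component names polled in the log, in the same order.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// listContainers mirrors the log's `sudo crictl ps -a --quiet --name=<c>`;
// crictl prints one container ID per line, so empty output means no match.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for cycle := 0; cycle < 5; cycle++ { // the real loop runs until a wait timeout
		for _, c := range components {
			ids, err := listContainers(c)
			if err != nil {
				fmt.Printf("crictl failed for %s: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: %d container(s)\n", c, len(ids))
		}
		time.Sleep(3 * time.Second) // the log shows roughly 3s between cycles
	}
}

In this run every component reports zero containers on every cycle, which is why the log keeps falling through to the generic log-gathering steps.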
	I1202 22:32:20.681946  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:20.692713  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:20.692790  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:20.716255  546345 cri.go:89] found id: ""
	I1202 22:32:20.716281  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.716290  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:20.716297  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:20.716355  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:20.743603  546345 cri.go:89] found id: ""
	I1202 22:32:20.743629  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.743638  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:20.743645  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:20.743705  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:20.768770  546345 cri.go:89] found id: ""
	I1202 22:32:20.768798  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.768807  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:20.768814  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:20.768878  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:20.805921  546345 cri.go:89] found id: ""
	I1202 22:32:20.805945  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.805954  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:20.805960  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:20.806018  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:20.882456  546345 cri.go:89] found id: ""
	I1202 22:32:20.882478  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.882486  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:20.882493  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:20.882548  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:20.906709  546345 cri.go:89] found id: ""
	I1202 22:32:20.906732  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.906740  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:20.906747  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:20.906803  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:20.930871  546345 cri.go:89] found id: ""
	I1202 22:32:20.930947  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.930970  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:20.930985  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:20.931072  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:20.954799  546345 cri.go:89] found id: ""
	I1202 22:32:20.954823  546345 logs.go:282] 0 containers: []
	W1202 22:32:20.954832  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:20.954841  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:20.954853  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:20.982221  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:20.982253  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:21.038726  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:21.038763  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:21.054186  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:21.054213  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:21.118780  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:21.110603    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:21.111205    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:21.112835    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:21.113193    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:21.114862    8923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:32:21.118846  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:21.118868  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
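Every "describe nodes" attempt fails identically: each kubectl request to https://localhost:8443 is rejected with "connection refused", which means nothing is listening on the apiserver port inside the node, so the failure sits upstream of kubectl itself. A tiny probe that reproduces just that check, assuming it is run inside the node (a hypothetical triage helper, not part of minikube):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// kubectl's requests die at TCP connect, before any TLS or HTTP happens,
	// so a plain dial against the same port shows the same failure mode.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}

With no kube-apiserver container ever appearing in the crictl listings, this probe would keep printing the same "connection refused" until the wait loop times out.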
	I1202 22:32:23.643583  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:23.655825  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:23.655896  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:23.680044  546345 cri.go:89] found id: ""
	I1202 22:32:23.680070  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.680079  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:23.680085  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:23.680143  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:23.708984  546345 cri.go:89] found id: ""
	I1202 22:32:23.709009  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.709017  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:23.709024  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:23.709082  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:23.734044  546345 cri.go:89] found id: ""
	I1202 22:32:23.734068  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.734076  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:23.734082  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:23.734142  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:23.763083  546345 cri.go:89] found id: ""
	I1202 22:32:23.763110  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.763118  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:23.763125  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:23.763183  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:23.809231  546345 cri.go:89] found id: ""
	I1202 22:32:23.809254  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.809262  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:23.809269  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:23.809328  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:23.877561  546345 cri.go:89] found id: ""
	I1202 22:32:23.877585  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.877593  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:23.877600  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:23.877685  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:23.900843  546345 cri.go:89] found id: ""
	I1202 22:32:23.900870  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.900879  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:23.900885  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:23.900948  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:23.926458  546345 cri.go:89] found id: ""
	I1202 22:32:23.926497  546345 logs.go:282] 0 containers: []
	W1202 22:32:23.926506  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:23.926515  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:23.926526  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:23.951259  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:23.951296  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:23.979352  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:23.979421  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:24.036927  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:24.036965  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:24.052889  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:24.052925  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:24.114973  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:24.108057    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:24.108597    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:24.110063    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:24.110491    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:24.111943    9038 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:32:26.615216  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:26.625455  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:26.625533  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:26.651390  546345 cri.go:89] found id: ""
	I1202 22:32:26.651423  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.651432  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:26.651439  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:26.651508  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:26.677027  546345 cri.go:89] found id: ""
	I1202 22:32:26.677052  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.677060  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:26.677067  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:26.677127  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:26.706368  546345 cri.go:89] found id: ""
	I1202 22:32:26.706391  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.706400  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:26.706406  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:26.706469  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:26.730421  546345 cri.go:89] found id: ""
	I1202 22:32:26.730445  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.730453  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:26.730460  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:26.730525  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:26.754523  546345 cri.go:89] found id: ""
	I1202 22:32:26.754552  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.754561  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:26.754569  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:26.754633  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:26.779516  546345 cri.go:89] found id: ""
	I1202 22:32:26.779545  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.779554  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:26.779568  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:26.779632  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:26.823212  546345 cri.go:89] found id: ""
	I1202 22:32:26.823237  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.823246  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:26.823253  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:26.823313  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:26.858245  546345 cri.go:89] found id: ""
	I1202 22:32:26.858282  546345 logs.go:282] 0 containers: []
	W1202 22:32:26.858291  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:26.858300  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:26.858313  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:26.917465  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:26.917500  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:26.933252  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:26.933281  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:26.995404  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:26.986840    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:26.987445    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:26.989131    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:26.989768    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:26.991326    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:32:26.995426  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:26.995438  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:27.021457  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:27.021490  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
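With no control-plane containers to inspect, each cycle gathers the same four diagnostics (kubelet journal, kernel warnings, containerd journal, container status); only "describe nodes" fails, because it is the one step that needs a live apiserver. A sketch that fans out those exact commands, run locally instead of over minikube's SSH runner (the SSH transport and log plumbing are omitted; this is illustrative, not logs.go):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The shell commands are copied verbatim from the log lines above.
	diagnostics := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"containerd", "sudo journalctl -u containerd -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, d := range diagnostics {
		out, err := exec.Command("/bin/bash", "-c", d.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", d.name, err)
		}
		fmt.Printf("=== %s ===\n%s\n", d.name, out)
	}
}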
	I1202 22:32:29.552148  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:29.562514  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:29.562594  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:29.587012  546345 cri.go:89] found id: ""
	I1202 22:32:29.587037  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.587046  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:29.587079  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:29.587163  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:29.613219  546345 cri.go:89] found id: ""
	I1202 22:32:29.613246  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.613254  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:29.613261  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:29.613321  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:29.638585  546345 cri.go:89] found id: ""
	I1202 22:32:29.638611  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.638619  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:29.638626  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:29.638682  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:29.663132  546345 cri.go:89] found id: ""
	I1202 22:32:29.663208  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.663225  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:29.663232  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:29.663304  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:29.686925  546345 cri.go:89] found id: ""
	I1202 22:32:29.686947  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.686955  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:29.686961  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:29.687021  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:29.711947  546345 cri.go:89] found id: ""
	I1202 22:32:29.711971  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.711979  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:29.711986  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:29.712047  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:29.735873  546345 cri.go:89] found id: ""
	I1202 22:32:29.735940  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.735962  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:29.735988  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:29.736071  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:29.764629  546345 cri.go:89] found id: ""
	I1202 22:32:29.764655  546345 logs.go:282] 0 containers: []
	W1202 22:32:29.764664  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:29.764674  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:29.764685  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:29.789251  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:29.789289  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:29.859060  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:29.859085  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:29.927618  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:29.927653  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:29.944397  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:29.944477  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:30.015300  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:30.004451    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:30.006385    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:30.007099    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:30.009385    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:30.010389    9263 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:32:32.515559  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:32.525887  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:32.525957  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:32.549813  546345 cri.go:89] found id: ""
	I1202 22:32:32.549848  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.549857  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:32.549865  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:32.549931  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:32.575230  546345 cri.go:89] found id: ""
	I1202 22:32:32.575253  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.575261  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:32.575268  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:32.575359  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:32.600349  546345 cri.go:89] found id: ""
	I1202 22:32:32.600374  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.600382  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:32.600389  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:32.600448  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:32.629053  546345 cri.go:89] found id: ""
	I1202 22:32:32.629078  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.629086  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:32.629095  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:32.629152  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:32.653727  546345 cri.go:89] found id: ""
	I1202 22:32:32.653750  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.653759  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:32.653766  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:32.653824  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:32.677981  546345 cri.go:89] found id: ""
	I1202 22:32:32.678019  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.678028  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:32.678035  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:32.678101  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:32.702199  546345 cri.go:89] found id: ""
	I1202 22:32:32.702222  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.702230  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:32.702237  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:32.702294  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:32.725924  546345 cri.go:89] found id: ""
	I1202 22:32:32.725957  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.725967  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:32.725976  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:32.726002  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:32.779589  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:32.779623  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:32.807508  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:32.807541  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:32.902366  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:32.894161    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.894903    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.896591    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.896873    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.898388    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:32:32.902386  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:32.902399  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:32.925648  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:32.925948  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:35.456822  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:35.467636  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:35.467796  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:35.496302  546345 cri.go:89] found id: ""
	I1202 22:32:35.496328  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.496337  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:35.496343  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:35.496407  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:35.525080  546345 cri.go:89] found id: ""
	I1202 22:32:35.525107  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.525116  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:35.525122  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:35.525187  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:35.549407  546345 cri.go:89] found id: ""
	I1202 22:32:35.549432  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.549441  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:35.549447  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:35.549505  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:35.574018  546345 cri.go:89] found id: ""
	I1202 22:32:35.574040  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.574049  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:35.574056  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:35.574115  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:35.604104  546345 cri.go:89] found id: ""
	I1202 22:32:35.604128  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.604137  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:35.604143  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:35.604201  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:35.629312  546345 cri.go:89] found id: ""
	I1202 22:32:35.629346  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.629355  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:35.629361  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:35.629427  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:35.653959  546345 cri.go:89] found id: ""
	I1202 22:32:35.653987  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.653996  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:35.654003  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:35.654064  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:35.678225  546345 cri.go:89] found id: ""
	I1202 22:32:35.678301  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.678325  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:35.678343  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:35.678368  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:35.733851  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:35.733884  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:35.749526  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:35.749554  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:35.844900  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:35.824762    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.828451    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.838437    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.839235    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.840948    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:32:35.844925  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:35.844940  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:35.882135  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:35.882168  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:38.412949  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:38.423327  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:38.423399  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:38.447069  546345 cri.go:89] found id: ""
	I1202 22:32:38.447097  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.447107  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:38.447148  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:38.447205  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:38.473526  546345 cri.go:89] found id: ""
	I1202 22:32:38.473549  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.473558  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:38.473565  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:38.473626  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:38.501943  546345 cri.go:89] found id: ""
	I1202 22:32:38.501974  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.501984  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:38.501990  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:38.502049  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:38.526634  546345 cri.go:89] found id: ""
	I1202 22:32:38.526657  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.526666  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:38.526672  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:38.526730  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:38.555523  546345 cri.go:89] found id: ""
	I1202 22:32:38.555549  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.555558  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:38.555564  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:38.555622  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:38.579778  546345 cri.go:89] found id: ""
	I1202 22:32:38.579804  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.579812  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:38.579819  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:38.579875  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:38.605528  546345 cri.go:89] found id: ""
	I1202 22:32:38.605589  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.605613  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:38.605633  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:38.605733  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:38.629391  546345 cri.go:89] found id: ""
	I1202 22:32:38.629412  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.629421  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:38.629429  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:38.629441  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:38.684729  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:38.684763  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:38.699841  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:38.699916  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:38.767359  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:38.760357    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.761047    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.762602    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.762883    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.764331    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:32:38.767378  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:38.767391  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:38.792073  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:38.792104  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:41.385000  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:41.395673  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:41.395741  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:41.420534  546345 cri.go:89] found id: ""
	I1202 22:32:41.420574  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.420586  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:41.420593  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:41.420652  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:41.445534  546345 cri.go:89] found id: ""
	I1202 22:32:41.445559  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.445567  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:41.445573  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:41.445635  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:41.470438  546345 cri.go:89] found id: ""
	I1202 22:32:41.470463  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.470473  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:41.470481  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:41.470551  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:41.495013  546345 cri.go:89] found id: ""
	I1202 22:32:41.495037  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.495045  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:41.495052  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:41.495139  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:41.520340  546345 cri.go:89] found id: ""
	I1202 22:32:41.520375  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.520385  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:41.520392  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:41.520488  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:41.545599  546345 cri.go:89] found id: ""
	I1202 22:32:41.545633  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.545642  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:41.545649  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:41.545753  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:41.570203  546345 cri.go:89] found id: ""
	I1202 22:32:41.570227  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.570235  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:41.570241  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:41.570317  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:41.595416  546345 cri.go:89] found id: ""
	I1202 22:32:41.595442  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.595451  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
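The scan above walks a fixed list of control-plane and addon container names, asking the CRI for matching container IDs; an empty result for every name is what produces the "No container was found" warnings. A hedged shell equivalent of that scan follows, with the component list copied from the log and <profile> as a placeholder.

    # Sketch: reproduce the per-component CRI scan by hand.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(minikube ssh -p <profile> -- "sudo crictl ps -a --quiet --name=$c" 2>/dev/null)
      echo "$c: ${ids:-<none found>}"
    done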
	I1202 22:32:41.595461  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:41.595493  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:41.622428  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:41.622456  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:41.678602  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:41.678634  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:41.694624  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:41.694654  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:41.757051  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:41.749001    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.749418    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.751146    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.751874    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.753439    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:41.749001    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.749418    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.751146    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.751874    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.753439    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
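Note that minikube prints the failing command's stderr twice, once inside the warning itself and once in the "** stderr **" output block; that duplication is the log format, not two separate failures. The underlying step can be re-run verbatim, as sketched below with paths copied from the log above and <profile> as a placeholder.

    # Sketch: re-run the exact 'describe nodes' step from the log.
    minikube ssh -p <profile> -- \
      'sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig'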
	I1202 22:32:41.757072  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:41.757085  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
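Each pass of this diagnostic cycle ends by re-checking for a kube-apiserver process before retrying roughly every three seconds. A rough shell equivalent of that wait loop is sketched below; this is an illustration under the same <profile> placeholder assumption, not minikube's actual implementation.

    # Sketch: poll until an apiserver process appears inside the node.
    until minikube ssh -p <profile> -- "sudo pgrep -xnf 'kube-apiserver.*minikube.*'" >/dev/null 2>&1; do
      sleep 3
    done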
	I1202 22:32:44.281854  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:44.292430  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:44.292510  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:44.317241  546345 cri.go:89] found id: ""
	I1202 22:32:44.317271  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.317279  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:44.317286  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:44.317350  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:44.341824  546345 cri.go:89] found id: ""
	I1202 22:32:44.341849  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.341857  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:44.341865  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:44.341926  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:44.366036  546345 cri.go:89] found id: ""
	I1202 22:32:44.366061  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.366070  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:44.366077  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:44.366139  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:44.391175  546345 cri.go:89] found id: ""
	I1202 22:32:44.391200  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.391209  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:44.391216  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:44.391292  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:44.420090  546345 cri.go:89] found id: ""
	I1202 22:32:44.420123  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.420132  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:44.420155  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:44.420234  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:44.444490  546345 cri.go:89] found id: ""
	I1202 22:32:44.444540  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.444549  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:44.444557  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:44.444612  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:44.470392  546345 cri.go:89] found id: ""
	I1202 22:32:44.470419  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.470427  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:44.470434  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:44.470493  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:44.495601  546345 cri.go:89] found id: ""
	I1202 22:32:44.495624  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.495633  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:44.495664  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:44.495690  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:44.549795  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:44.549886  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:44.567082  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:44.567110  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:44.632540  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:44.624658    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.625347    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.626939    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.627534    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.629113    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:44.624658    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.625347    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.626939    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.627534    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.629113    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:44.632570  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:44.632582  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:44.657144  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:44.657180  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:47.185793  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:47.196271  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:47.196339  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:47.226550  546345 cri.go:89] found id: ""
	I1202 22:32:47.226572  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.226581  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:47.226588  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:47.226645  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:47.250706  546345 cri.go:89] found id: ""
	I1202 22:32:47.250732  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.250741  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:47.250748  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:47.250811  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:47.280047  546345 cri.go:89] found id: ""
	I1202 22:32:47.280072  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.280081  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:47.280088  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:47.280154  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:47.306607  546345 cri.go:89] found id: ""
	I1202 22:32:47.306633  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.306642  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:47.306651  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:47.306718  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:47.330953  546345 cri.go:89] found id: ""
	I1202 22:32:47.331024  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.331038  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:47.331045  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:47.331105  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:47.360182  546345 cri.go:89] found id: ""
	I1202 22:32:47.360206  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.360215  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:47.360222  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:47.360293  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:47.388010  546345 cri.go:89] found id: ""
	I1202 22:32:47.388032  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.388041  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:47.388048  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:47.388114  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:47.415262  546345 cri.go:89] found id: ""
	I1202 22:32:47.415294  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.415303  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:47.415312  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:47.415326  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:47.433260  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:47.433288  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:47.497337  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:47.489370    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.490186    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.491743    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.492249    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.493701    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:47.489370    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.490186    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.491743    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.492249    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.493701    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:47.497366  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:47.497378  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:47.521722  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:47.521801  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:47.548995  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:47.549027  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:50.107291  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:50.119155  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:50.119230  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:50.144228  546345 cri.go:89] found id: ""
	I1202 22:32:50.144252  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.144261  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:50.144268  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:50.144329  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:50.172928  546345 cri.go:89] found id: ""
	I1202 22:32:50.172951  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.172959  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:50.172966  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:50.173027  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:50.201752  546345 cri.go:89] found id: ""
	I1202 22:32:50.201795  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.201804  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:50.201811  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:50.201873  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:50.225118  546345 cri.go:89] found id: ""
	I1202 22:32:50.225139  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.225148  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:50.225154  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:50.225217  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:50.251396  546345 cri.go:89] found id: ""
	I1202 22:32:50.251421  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.251430  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:50.251437  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:50.251495  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:50.278859  546345 cri.go:89] found id: ""
	I1202 22:32:50.278887  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.278896  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:50.278903  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:50.278961  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:50.302858  546345 cri.go:89] found id: ""
	I1202 22:32:50.302891  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.302900  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:50.302907  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:50.302972  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:50.330618  546345 cri.go:89] found id: ""
	I1202 22:32:50.330642  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.330650  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:50.330659  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:50.330670  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:50.347121  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:50.347147  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:50.414460  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:50.406836   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.407605   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.409232   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.409526   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.410953   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:50.406836   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.407605   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.409232   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.409526   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.410953   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:50.414482  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:50.414496  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:50.438651  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:50.438682  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:50.466506  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:50.466532  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:53.022126  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:53.032606  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:53.032678  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:53.068046  546345 cri.go:89] found id: ""
	I1202 22:32:53.068078  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.068088  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:53.068095  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:53.068154  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:53.130393  546345 cri.go:89] found id: ""
	I1202 22:32:53.130414  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.130423  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:53.130429  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:53.130488  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:53.156458  546345 cri.go:89] found id: ""
	I1202 22:32:53.156481  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.156498  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:53.156504  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:53.156564  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:53.180994  546345 cri.go:89] found id: ""
	I1202 22:32:53.181067  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.181090  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:53.181110  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:53.181196  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:53.204951  546345 cri.go:89] found id: ""
	I1202 22:32:53.204976  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.204985  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:53.204993  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:53.205053  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:53.232863  546345 cri.go:89] found id: ""
	I1202 22:32:53.232896  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.232905  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:53.232912  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:53.232981  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:53.263355  546345 cri.go:89] found id: ""
	I1202 22:32:53.263381  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.263390  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:53.263396  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:53.263454  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:53.288048  546345 cri.go:89] found id: ""
	I1202 22:32:53.288074  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.288082  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:53.288092  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:53.288103  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:53.343380  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:53.343416  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:53.359279  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:53.359304  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:53.426667  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:53.418963   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.419594   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.421185   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.421729   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.423366   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:53.418963   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.419594   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.421185   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.421729   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.423366   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:53.426690  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:53.426703  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:53.451602  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:53.451640  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:55.979195  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:55.989644  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:55.989738  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:56.016824  546345 cri.go:89] found id: ""
	I1202 22:32:56.016857  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.016866  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:56.016873  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:56.016939  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:56.061785  546345 cri.go:89] found id: ""
	I1202 22:32:56.061833  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.061846  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:56.061854  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:56.061938  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:56.099312  546345 cri.go:89] found id: ""
	I1202 22:32:56.099341  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.099351  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:56.099359  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:56.099422  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:56.133179  546345 cri.go:89] found id: ""
	I1202 22:32:56.133209  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.133217  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:56.133224  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:56.133285  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:56.166397  546345 cri.go:89] found id: ""
	I1202 22:32:56.166420  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.166429  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:56.166435  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:56.166493  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:56.191240  546345 cri.go:89] found id: ""
	I1202 22:32:56.191300  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.191323  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:56.191343  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:56.191406  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:56.219940  546345 cri.go:89] found id: ""
	I1202 22:32:56.219966  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.219975  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:56.219982  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:56.220042  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:56.245089  546345 cri.go:89] found id: ""
	I1202 22:32:56.245116  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.245125  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:56.245134  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:56.245145  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:56.275969  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:56.275995  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:56.330353  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:56.330388  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:56.346262  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:56.346293  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:56.411285  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:56.403890   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.404747   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.406252   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.406670   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.408188   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:56.403890   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.404747   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.406252   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.406670   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.408188   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:56.411307  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:56.411320  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:58.937516  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:58.947690  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:58.947760  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:58.971175  546345 cri.go:89] found id: ""
	I1202 22:32:58.971209  546345 logs.go:282] 0 containers: []
	W1202 22:32:58.971221  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:58.971229  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:58.971289  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:58.995437  546345 cri.go:89] found id: ""
	I1202 22:32:58.995465  546345 logs.go:282] 0 containers: []
	W1202 22:32:58.995474  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:58.995481  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:58.995538  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:59.021289  546345 cri.go:89] found id: ""
	I1202 22:32:59.021315  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.021323  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:59.021329  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:59.021388  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:59.067648  546345 cri.go:89] found id: ""
	I1202 22:32:59.067676  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.067684  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:59.067691  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:59.067752  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:59.120318  546345 cri.go:89] found id: ""
	I1202 22:32:59.120353  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.120362  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:59.120369  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:59.120435  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:59.147811  546345 cri.go:89] found id: ""
	I1202 22:32:59.147845  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.147855  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:59.147862  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:59.147929  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:59.176414  546345 cri.go:89] found id: ""
	I1202 22:32:59.176448  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.176456  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:59.176463  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:59.176534  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:59.202001  546345 cri.go:89] found id: ""
	I1202 22:32:59.202027  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.202035  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:59.202045  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:59.202056  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:59.257545  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:59.257581  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:59.273305  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:59.273385  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:59.335480  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:59.328097   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.328843   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.330336   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.330876   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.332477   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:59.328097   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.328843   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.330336   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.330876   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.332477   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:59.335501  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:59.335514  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:59.359981  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:59.360017  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:01.886549  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:01.897148  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:01.897222  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:01.923194  546345 cri.go:89] found id: ""
	I1202 22:33:01.923220  546345 logs.go:282] 0 containers: []
	W1202 22:33:01.923229  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:01.923236  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:01.923295  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:01.947898  546345 cri.go:89] found id: ""
	I1202 22:33:01.947922  546345 logs.go:282] 0 containers: []
	W1202 22:33:01.947930  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:01.947937  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:01.947996  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:01.977128  546345 cri.go:89] found id: ""
	I1202 22:33:01.977153  546345 logs.go:282] 0 containers: []
	W1202 22:33:01.977161  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:01.977167  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:01.977226  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:02.004541  546345 cri.go:89] found id: ""
	I1202 22:33:02.004569  546345 logs.go:282] 0 containers: []
	W1202 22:33:02.004578  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:02.004586  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:02.004660  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:02.034163  546345 cri.go:89] found id: ""
	I1202 22:33:02.034189  546345 logs.go:282] 0 containers: []
	W1202 22:33:02.034199  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:02.034206  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:02.034302  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:02.103583  546345 cri.go:89] found id: ""
	I1202 22:33:02.103619  546345 logs.go:282] 0 containers: []
	W1202 22:33:02.103628  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:02.103651  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:02.103732  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:02.141546  546345 cri.go:89] found id: ""
	I1202 22:33:02.141581  546345 logs.go:282] 0 containers: []
	W1202 22:33:02.141590  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:02.141597  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:02.141672  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:02.166780  546345 cri.go:89] found id: ""
	I1202 22:33:02.166805  546345 logs.go:282] 0 containers: []
	W1202 22:33:02.166815  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:02.166824  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:02.166835  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:02.191150  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:02.191186  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:02.222079  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:02.222108  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:02.279420  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:02.279453  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:02.295466  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:02.295494  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:02.371035  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:02.361373   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.362484   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.364780   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.365388   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.366360   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
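	[editor's note] What the loop above is doing: minikube probes for a live control plane by searching for a kube-apiserver process, asking the CRI runtime for each expected component container, and, finding none, falling back to collecting host-level logs. As a rough illustration (not minikube's own tooling), the same checks can be replayed by hand inside the node with exactly the commands the log shows; the kubectl binary and kubeconfig paths below are the ones from the log:

	# Replay minikube's diagnostic probe by hand (sketch; run inside the node)
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"   # empty output = no container found
	done
	sudo journalctl -u kubelet -n 400          # kubelet logs, as gathered above
	sudo journalctl -u containerd -n 400       # container runtime logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig  # fails while the apiserver is down
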
	I1202 22:33:04.872723  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:04.882988  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:04.883064  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:04.906907  546345 cri.go:89] found id: ""
	I1202 22:33:04.906931  546345 logs.go:282] 0 containers: []
	W1202 22:33:04.906940  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:04.906947  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:04.907006  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:04.931077  546345 cri.go:89] found id: ""
	I1202 22:33:04.931102  546345 logs.go:282] 0 containers: []
	W1202 22:33:04.931111  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:04.931119  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:04.931176  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:04.954232  546345 cri.go:89] found id: ""
	I1202 22:33:04.954258  546345 logs.go:282] 0 containers: []
	W1202 22:33:04.954266  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:04.954273  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:04.954332  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:04.978316  546345 cri.go:89] found id: ""
	I1202 22:33:04.978339  546345 logs.go:282] 0 containers: []
	W1202 22:33:04.978347  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:04.978354  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:04.978412  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:05.008227  546345 cri.go:89] found id: ""
	I1202 22:33:05.008253  546345 logs.go:282] 0 containers: []
	W1202 22:33:05.008261  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:05.008269  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:05.008401  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:05.037911  546345 cri.go:89] found id: ""
	I1202 22:33:05.037948  546345 logs.go:282] 0 containers: []
	W1202 22:33:05.037957  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:05.037964  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:05.038041  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:05.115835  546345 cri.go:89] found id: ""
	I1202 22:33:05.115860  546345 logs.go:282] 0 containers: []
	W1202 22:33:05.115869  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:05.115876  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:05.115944  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:05.142576  546345 cri.go:89] found id: ""
	I1202 22:33:05.142599  546345 logs.go:282] 0 containers: []
	W1202 22:33:05.142608  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:05.142617  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:05.142628  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:05.172774  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:05.172802  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:05.229451  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:05.229486  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:05.245158  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:05.245184  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:05.308964  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:05.301260   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.302075   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.303718   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.304189   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.305899   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:05.308985  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:05.309000  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
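	[editor's note] Every `describe nodes` attempt fails identically: the kubeconfig points kubectl at localhost:8443, and the dial is refused because no kube-apiserver is listening there. Two quick hand checks confirm both halves of that diagnosis; the pgrep pattern is the one the probe itself runs, while the socket check via ss is an added assumption (any port-listing tool would do):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	sudo ss -ltn 'sport = :8443'   # assumed: ss is available in the node image
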
	I1202 22:33:07.834473  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:07.845693  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:07.845780  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:07.870140  546345 cri.go:89] found id: ""
	I1202 22:33:07.870162  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.870171  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:07.870178  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:07.870238  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:07.894539  546345 cri.go:89] found id: ""
	I1202 22:33:07.894562  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.894570  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:07.894583  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:07.894640  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:07.918644  546345 cri.go:89] found id: ""
	I1202 22:33:07.918672  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.918681  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:07.918688  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:07.918751  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:07.942273  546345 cri.go:89] found id: ""
	I1202 22:33:07.942296  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.942304  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:07.942310  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:07.942367  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:07.965678  546345 cri.go:89] found id: ""
	I1202 22:33:07.965703  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.965712  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:07.965718  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:07.965775  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:07.989455  546345 cri.go:89] found id: ""
	I1202 22:33:07.989480  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.989489  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:07.989496  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:07.989556  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:08.015583  546345 cri.go:89] found id: ""
	I1202 22:33:08.015608  546345 logs.go:282] 0 containers: []
	W1202 22:33:08.015617  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:08.015624  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:08.015686  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:08.068697  546345 cri.go:89] found id: ""
	I1202 22:33:08.068724  546345 logs.go:282] 0 containers: []
	W1202 22:33:08.068734  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:08.068745  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:08.068768  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:08.112700  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:08.112750  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:08.148124  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:08.148159  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:08.208343  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:08.208384  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:08.224299  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:08.224331  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:08.287847  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:08.279728   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.280541   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.282177   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.282779   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.284345   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:10.788102  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:10.798373  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:10.798493  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:10.826690  546345 cri.go:89] found id: ""
	I1202 22:33:10.826715  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.826724  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:10.826731  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:10.826791  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:10.857739  546345 cri.go:89] found id: ""
	I1202 22:33:10.857765  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.857773  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:10.857780  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:10.857841  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:10.886900  546345 cri.go:89] found id: ""
	I1202 22:33:10.886926  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.886935  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:10.886942  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:10.887001  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:10.915788  546345 cri.go:89] found id: ""
	I1202 22:33:10.915811  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.915820  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:10.915826  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:10.915883  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:10.940846  546345 cri.go:89] found id: ""
	I1202 22:33:10.940869  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.940877  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:10.940883  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:10.940942  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:10.969358  546345 cri.go:89] found id: ""
	I1202 22:33:10.969380  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.969389  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:10.969396  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:10.969452  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:10.994365  546345 cri.go:89] found id: ""
	I1202 22:33:10.994389  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.994398  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:10.994405  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:10.994488  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:11.021354  546345 cri.go:89] found id: ""
	I1202 22:33:11.021376  546345 logs.go:282] 0 containers: []
	W1202 22:33:11.021387  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:11.021396  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:11.021406  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:11.096880  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:11.096922  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:11.115249  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:11.115286  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:11.192270  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:11.184836   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.185321   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.186779   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.187091   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.188512   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:11.192290  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:11.192305  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:11.216801  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:11.216838  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:13.747802  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:13.758663  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:13.758739  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:13.784138  546345 cri.go:89] found id: ""
	I1202 22:33:13.784160  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.784169  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:13.784175  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:13.784242  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:13.810746  546345 cri.go:89] found id: ""
	I1202 22:33:13.810768  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.810777  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:13.810783  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:13.810841  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:13.834531  546345 cri.go:89] found id: ""
	I1202 22:33:13.834563  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.834571  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:13.834578  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:13.834644  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:13.858698  546345 cri.go:89] found id: ""
	I1202 22:33:13.858721  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.858729  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:13.858736  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:13.858798  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:13.882726  546345 cri.go:89] found id: ""
	I1202 22:33:13.882749  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.882757  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:13.882764  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:13.882822  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:13.908263  546345 cri.go:89] found id: ""
	I1202 22:33:13.908287  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.908296  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:13.908302  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:13.908359  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:13.933266  546345 cri.go:89] found id: ""
	I1202 22:33:13.933290  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.933298  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:13.933304  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:13.933361  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:13.957668  546345 cri.go:89] found id: ""
	I1202 22:33:13.957738  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.957753  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:13.957764  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:13.957776  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:13.983158  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:13.983193  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:14.013404  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:14.013434  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:14.076941  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:14.076982  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:14.122673  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:14.122701  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:14.186208  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:14.178781   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.179568   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.180719   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.181367   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.183063   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:16.686471  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:16.697167  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:16.697255  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:16.721334  546345 cri.go:89] found id: ""
	I1202 22:33:16.721358  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.721367  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:16.721374  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:16.721439  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:16.744849  546345 cri.go:89] found id: ""
	I1202 22:33:16.744875  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.744887  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:16.744893  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:16.744950  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:16.768289  546345 cri.go:89] found id: ""
	I1202 22:33:16.768315  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.768324  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:16.768330  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:16.768390  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:16.793721  546345 cri.go:89] found id: ""
	I1202 22:33:16.793745  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.793754  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:16.793761  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:16.793822  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:16.819397  546345 cri.go:89] found id: ""
	I1202 22:33:16.819419  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.819427  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:16.819434  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:16.819493  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:16.847655  546345 cri.go:89] found id: ""
	I1202 22:33:16.847682  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.847691  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:16.847699  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:16.847779  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:16.872502  546345 cri.go:89] found id: ""
	I1202 22:33:16.872527  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.872535  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:16.872542  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:16.872605  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:16.904922  546345 cri.go:89] found id: ""
	I1202 22:33:16.904953  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.904968  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:16.904978  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:16.904990  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:16.929494  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:16.929529  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:16.960812  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:16.960840  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:17.015332  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:17.015369  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:17.031163  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:17.031192  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:17.146404  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:17.138582   11076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:17.139239   11076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:17.140866   11076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:17.141549   11076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:17.143289   11076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:19.646668  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:19.656904  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:19.656972  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:19.681366  546345 cri.go:89] found id: ""
	I1202 22:33:19.681390  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.681397  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:19.681404  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:19.681462  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:19.705682  546345 cri.go:89] found id: ""
	I1202 22:33:19.705711  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.705720  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:19.705726  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:19.705782  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:19.728889  546345 cri.go:89] found id: ""
	I1202 22:33:19.728913  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.728921  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:19.728928  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:19.728986  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:19.753177  546345 cri.go:89] found id: ""
	I1202 22:33:19.753200  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.753209  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:19.753215  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:19.753275  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:19.777064  546345 cri.go:89] found id: ""
	I1202 22:33:19.777087  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.777095  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:19.777101  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:19.777165  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:19.804440  546345 cri.go:89] found id: ""
	I1202 22:33:19.804462  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.804479  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:19.804487  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:19.804544  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:19.831370  546345 cri.go:89] found id: ""
	I1202 22:33:19.831395  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.831403  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:19.831409  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:19.831470  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:19.854457  546345 cri.go:89] found id: ""
	I1202 22:33:19.854481  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.854489  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:19.854498  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:19.854512  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:19.912020  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:19.912055  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:19.927521  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:19.927549  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:19.988124  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:19.980291   11176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:19.980690   11176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:19.981977   11176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:19.982920   11176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:19.984635   11176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:19.988188  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:19.988211  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:20.013304  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:20.013341  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:22.562705  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:22.573519  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:22.573597  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:22.603465  546345 cri.go:89] found id: ""
	I1202 22:33:22.603541  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.603556  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:22.603564  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:22.603670  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:22.629949  546345 cri.go:89] found id: ""
	I1202 22:33:22.629976  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.629985  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:22.629991  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:22.630051  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:22.660760  546345 cri.go:89] found id: ""
	I1202 22:33:22.660785  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.660794  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:22.660801  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:22.660861  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:22.685501  546345 cri.go:89] found id: ""
	I1202 22:33:22.685531  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.685540  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:22.685555  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:22.685618  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:22.712679  546345 cri.go:89] found id: ""
	I1202 22:33:22.712714  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.712723  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:22.712730  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:22.712799  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:22.738275  546345 cri.go:89] found id: ""
	I1202 22:33:22.738301  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.738310  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:22.738317  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:22.738437  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:22.767652  546345 cri.go:89] found id: ""
	I1202 22:33:22.767677  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.767686  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:22.767694  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:22.767756  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:22.793810  546345 cri.go:89] found id: ""
	I1202 22:33:22.793836  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.793845  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:22.793854  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:22.793866  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:22.856577  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:22.856615  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:22.872185  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:22.872221  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:22.937005  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:22.929061   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:22.930043   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:22.931595   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:22.932111   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:22.933624   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:22.937039  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:22.937052  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:22.961706  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:22.961743  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
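	[editor's note] The timestamps show the probe repeating on a roughly three-second cadence (22:33:02, :04, :07, :10, :13, :16, :19, :22, :25) with no change in outcome. In shape, the wait amounts to the loop below; this is a sketch of the observed behavior, not minikube's actual code:

	# Keep polling until a kube-apiserver container appears (sketch)
	until [ -n "$(sudo crictl ps -a --quiet --name=kube-apiserver)" ]; do
	  sleep 3   # matches the ~3s retry interval visible in the log
	done
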
	I1202 22:33:25.491815  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:25.502275  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:25.502392  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:25.526647  546345 cri.go:89] found id: ""
	I1202 22:33:25.526680  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.526688  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:25.526695  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:25.526767  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:25.554949  546345 cri.go:89] found id: ""
	I1202 22:33:25.554970  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.554980  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:25.554986  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:25.555043  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:25.578929  546345 cri.go:89] found id: ""
	I1202 22:33:25.578953  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.578962  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:25.578968  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:25.579044  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:25.608022  546345 cri.go:89] found id: ""
	I1202 22:33:25.608056  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.608065  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:25.608088  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:25.608169  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:25.636085  546345 cri.go:89] found id: ""
	I1202 22:33:25.636120  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.636130  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:25.636153  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:25.636235  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:25.666823  546345 cri.go:89] found id: ""
	I1202 22:33:25.666856  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.666865  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:25.666873  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:25.666942  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:25.690601  546345 cri.go:89] found id: ""
	I1202 22:33:25.690635  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.690645  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:25.690652  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:25.690723  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:25.719343  546345 cri.go:89] found id: ""
	I1202 22:33:25.719379  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.719388  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:25.719396  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:25.719408  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:25.743724  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:25.743768  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:25.771761  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:25.771786  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:25.828678  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:25.828713  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:25.844300  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:25.844332  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:25.908308  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:25.900092   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:25.900613   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:25.902283   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:25.902859   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:25.904505   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:25.900092   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:25.900613   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:25.902283   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:25.902859   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:25.904505   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
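
[Note] The cycle above repeats a fixed probe: for each control-plane component, minikube runs `sudo crictl ps -a --quiet --name=<component>` and treats empty output as "No container was found matching ...". Below is a minimal Go sketch of that pattern; it is illustrative only, not minikube's actual cri.go. The command strings are copied from the log, everything else (names, structure) is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// The same component list the log walks through on every iteration.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// listContainerIDs mirrors the `sudo crictl ps -a --quiet --name=...` calls
// in the log: --quiet prints one container ID per line, so empty output
// means no matching container exists in any state.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("found %d container(s) for %q: %v\n", len(ids), c, ids)
	}
}
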
	I1202 22:33:28.409045  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:28.420392  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:28.420486  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:28.451663  546345 cri.go:89] found id: ""
	I1202 22:33:28.451687  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.451696  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:28.451704  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:28.451770  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:28.480763  546345 cri.go:89] found id: ""
	I1202 22:33:28.480788  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.480797  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:28.480804  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:28.480888  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:28.505757  546345 cri.go:89] found id: ""
	I1202 22:33:28.505781  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.505789  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:28.505796  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:28.505882  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:28.530092  546345 cri.go:89] found id: ""
	I1202 22:33:28.530124  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.530134  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:28.530141  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:28.530202  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:28.555441  546345 cri.go:89] found id: ""
	I1202 22:33:28.555468  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.555477  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:28.555484  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:28.555542  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:28.588393  546345 cri.go:89] found id: ""
	I1202 22:33:28.588414  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.588422  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:28.588429  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:28.588498  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:28.615564  546345 cri.go:89] found id: ""
	I1202 22:33:28.615586  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.615595  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:28.615602  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:28.615663  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:28.640294  546345 cri.go:89] found id: ""
	I1202 22:33:28.640316  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.640324  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:28.640333  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:28.640344  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:28.670446  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:28.670473  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:28.731540  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:28.731583  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:28.747338  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:28.747365  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:28.807964  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:28.800513   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:28.801318   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:28.802857   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:28.803139   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:28.804600   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:28.800513   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:28.801318   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:28.802857   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:28.803139   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:28.804600   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:28.807987  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:28.808001  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:31.332523  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:31.349889  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:31.349961  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:31.381168  546345 cri.go:89] found id: ""
	I1202 22:33:31.381196  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.381204  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:31.381211  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:31.381274  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:31.408915  546345 cri.go:89] found id: ""
	I1202 22:33:31.408947  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.408956  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:31.408963  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:31.409025  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:31.433408  546345 cri.go:89] found id: ""
	I1202 22:33:31.433433  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.433441  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:31.433448  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:31.433506  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:31.457935  546345 cri.go:89] found id: ""
	I1202 22:33:31.457968  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.457976  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:31.457983  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:31.458053  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:31.481621  546345 cri.go:89] found id: ""
	I1202 22:33:31.481694  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.481704  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:31.481711  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:31.481781  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:31.505764  546345 cri.go:89] found id: ""
	I1202 22:33:31.505789  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.505799  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:31.505805  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:31.505864  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:31.530522  546345 cri.go:89] found id: ""
	I1202 22:33:31.530557  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.530565  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:31.530572  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:31.530639  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:31.558641  546345 cri.go:89] found id: ""
	I1202 22:33:31.558706  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.558720  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:31.558731  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:31.558747  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:31.614675  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:31.614707  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:31.630252  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:31.630279  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:31.695335  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:31.687643   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:31.688201   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:31.689779   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:31.690376   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:31.692067   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:31.687643   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:31.688201   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:31.689779   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:31.690376   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:31.692067   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:31.695359  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:31.695372  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:31.719979  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:31.720013  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
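
[Note] Each iteration also gathers diagnostics by shelling out; the exact pipelines appear verbatim in the "Gathering logs for ..." lines. A self-contained sketch of running the same four gathers locally follows; only the command strings come from the log, the wrapper itself is an assumption (minikube runs these through ssh_runner over SSH).

package main

import (
	"fmt"
	"os/exec"
)

// Command strings copied verbatim from the log's gather steps.
var gathers = []struct{ name, cmd string }{
	{"kubelet", "sudo journalctl -u kubelet -n 400"},
	{"containerd", "sudo journalctl -u containerd -n 400"},
	{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
}

func main() {
	for _, g := range gathers {
		fmt.Printf("==> Gathering logs for %s ...\n", g.name)
		// Run through bash -c, as the log does, so pipes and backticks work.
		out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gather %q failed: %v\n", g.name, err)
		}
		fmt.Print(string(out))
	}
}
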
	I1202 22:33:34.252356  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:34.264856  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:34.264924  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:34.303387  546345 cri.go:89] found id: ""
	I1202 22:33:34.303422  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.303437  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:34.303445  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:34.303502  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:34.377615  546345 cri.go:89] found id: ""
	I1202 22:33:34.377643  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.377665  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:34.377673  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:34.377750  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:34.409336  546345 cri.go:89] found id: ""
	I1202 22:33:34.409359  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.409367  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:34.409374  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:34.409433  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:34.434153  546345 cri.go:89] found id: ""
	I1202 22:33:34.434175  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.434184  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:34.434190  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:34.434250  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:34.459524  546345 cri.go:89] found id: ""
	I1202 22:33:34.459549  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.459558  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:34.459565  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:34.459622  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:34.487835  546345 cri.go:89] found id: ""
	I1202 22:33:34.487862  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.487871  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:34.487878  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:34.487939  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:34.511616  546345 cri.go:89] found id: ""
	I1202 22:33:34.511638  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.511647  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:34.511654  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:34.511712  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:34.539284  546345 cri.go:89] found id: ""
	I1202 22:33:34.539307  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.539315  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:34.539324  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:34.539335  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:34.594370  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:34.594404  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:34.610176  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:34.610203  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:34.674945  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:34.667881   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:34.668374   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:34.669938   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:34.670382   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:34.671879   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:34.667881   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:34.668374   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:34.669938   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:34.670382   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:34.671879   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:34.674968  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:34.674980  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:34.699820  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:34.699855  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:37.235245  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:37.245512  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:37.245580  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:37.270720  546345 cri.go:89] found id: ""
	I1202 22:33:37.270743  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.270751  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:37.270757  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:37.270818  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:37.317208  546345 cri.go:89] found id: ""
	I1202 22:33:37.317236  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.317244  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:37.317250  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:37.317357  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:37.381241  546345 cri.go:89] found id: ""
	I1202 22:33:37.381304  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.381319  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:37.381331  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:37.381391  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:37.406579  546345 cri.go:89] found id: ""
	I1202 22:33:37.406604  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.406613  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:37.406620  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:37.406676  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:37.431035  546345 cri.go:89] found id: ""
	I1202 22:33:37.431061  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.431071  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:37.431078  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:37.431170  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:37.455450  546345 cri.go:89] found id: ""
	I1202 22:33:37.455476  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.455485  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:37.455491  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:37.455549  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:37.479696  546345 cri.go:89] found id: ""
	I1202 22:33:37.479763  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.479784  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:37.479791  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:37.479864  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:37.504424  546345 cri.go:89] found id: ""
	I1202 22:33:37.504449  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.504465  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:37.504475  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:37.504486  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:37.562929  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:37.562965  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:37.578720  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:37.578749  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:37.643738  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:37.635957   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.636680   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.638363   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.638894   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.640533   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:37.635957   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.636680   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.638363   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.638894   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.640533   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:37.643758  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:37.643770  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:37.669355  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:37.669389  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
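
[Note] The recurring "failed describe nodes" warning comes from running the version-pinned kubectl against the node-local kubeconfig; a non-zero exit whose stderr contains "connection refused" on localhost:8443 means the apiserver is simply not listening yet, not that the command itself is malformed. A hedged sketch of that check: paths are copied from the log, the handling logic is assumed.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation the log repeats on every iteration.
	cmd := "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		// "connection refused" on [::1]:8443 indicates the apiserver
		// is not up yet; callers can treat this as retryable.
		if strings.Contains(string(out), "connection refused") {
			fmt.Println("apiserver not reachable on localhost:8443 yet; retry later")
			return
		}
		fmt.Printf("describe nodes failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
}
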
	I1202 22:33:40.197629  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:40.209725  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:40.209798  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:40.235226  546345 cri.go:89] found id: ""
	I1202 22:33:40.235249  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.235258  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:40.235265  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:40.235323  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:40.264913  546345 cri.go:89] found id: ""
	I1202 22:33:40.264938  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.264948  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:40.264955  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:40.265014  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:40.292266  546345 cri.go:89] found id: ""
	I1202 22:33:40.292293  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.292302  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:40.292309  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:40.292366  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:40.328677  546345 cri.go:89] found id: ""
	I1202 22:33:40.328703  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.328712  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:40.328718  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:40.328779  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:40.372520  546345 cri.go:89] found id: ""
	I1202 22:33:40.372553  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.372562  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:40.372570  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:40.372637  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:40.401860  546345 cri.go:89] found id: ""
	I1202 22:33:40.401896  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.401906  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:40.401913  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:40.401981  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:40.426706  546345 cri.go:89] found id: ""
	I1202 22:33:40.426774  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.426790  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:40.426797  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:40.426871  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:40.450845  546345 cri.go:89] found id: ""
	I1202 22:33:40.450873  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.450882  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:40.450892  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:40.450921  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:40.466330  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:40.466359  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:40.530421  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:40.522152   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.522737   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.524454   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.524953   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.526601   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:40.522152   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.522737   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.524454   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.524953   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.526601   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:40.530440  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:40.530471  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:40.557935  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:40.557971  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:40.589359  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:40.589413  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:43.149757  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:43.160459  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:43.160531  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:43.185860  546345 cri.go:89] found id: ""
	I1202 22:33:43.185885  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.185893  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:43.185900  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:43.185959  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:43.213745  546345 cri.go:89] found id: ""
	I1202 22:33:43.213771  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.213782  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:43.213788  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:43.213845  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:43.238763  546345 cri.go:89] found id: ""
	I1202 22:33:43.238788  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.238796  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:43.238805  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:43.238865  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:43.263259  546345 cri.go:89] found id: ""
	I1202 22:33:43.263285  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.263294  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:43.263301  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:43.263362  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:43.287780  546345 cri.go:89] found id: ""
	I1202 22:33:43.287804  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.287812  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:43.287818  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:43.287901  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:43.333797  546345 cri.go:89] found id: ""
	I1202 22:33:43.333819  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.333827  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:43.333833  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:43.333891  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:43.379712  546345 cri.go:89] found id: ""
	I1202 22:33:43.379734  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.379743  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:43.379749  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:43.379808  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:43.415160  546345 cri.go:89] found id: ""
	I1202 22:33:43.415240  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.415264  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:43.415282  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:43.415306  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:43.442448  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:43.442475  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:43.497169  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:43.497207  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:43.513334  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:43.513370  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:43.577650  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:43.569606   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.570071   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.571853   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.572346   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.574036   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:43.569606   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.570071   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.571853   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.572346   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.574036   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:43.577691  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:43.577704  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
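
[Note] Each iteration opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`, where exit status 0 means a matching apiserver process exists and a non-zero exit simply means "not up yet". A minimal polling sketch under that assumption; the roughly 3-second interval is inferred from the timestamps above and is not taken from minikube's source.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning reports whether a kube-apiserver process matching the
// minikube profile exists. pgrep flags: -x exact match, -n newest process,
// -f match against the full command line.
func apiserverRunning() bool {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil
}

func main() {
	for i := 0; i < 5; i++ {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		fmt.Println("kube-apiserver not running yet; retrying in ~3s")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for kube-apiserver")
}
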
	I1202 22:33:46.104276  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:46.114696  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:46.114770  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:46.143775  546345 cri.go:89] found id: ""
	I1202 22:33:46.143798  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.143806  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:46.143813  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:46.143872  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:46.168484  546345 cri.go:89] found id: ""
	I1202 22:33:46.168508  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.168517  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:46.168527  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:46.168585  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:46.195213  546345 cri.go:89] found id: ""
	I1202 22:33:46.195236  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.195244  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:46.195251  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:46.195316  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:46.218803  546345 cri.go:89] found id: ""
	I1202 22:33:46.218825  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.218833  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:46.218840  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:46.218902  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:46.242627  546345 cri.go:89] found id: ""
	I1202 22:33:46.242649  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.242657  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:46.242664  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:46.242735  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:46.268270  546345 cri.go:89] found id: ""
	I1202 22:33:46.268299  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.268314  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:46.268322  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:46.268398  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:46.303449  546345 cri.go:89] found id: ""
	I1202 22:33:46.303476  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.303484  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:46.303491  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:46.303547  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:46.355851  546345 cri.go:89] found id: ""
	I1202 22:33:46.355877  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.355886  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:46.355895  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:46.355906  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:46.372396  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:46.372426  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:46.448683  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:46.440678   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.441128   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.442893   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.443519   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.445111   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:46.440678   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.441128   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.442893   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.443519   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.445111   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
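
	The block above is the characteristic signature of an apiserver that never came up: client-go retries its discovery request five times (the memcache.go lines) before kubectl gives up, and every attempt fails with "connection refused" on localhost:8443 because nothing is listening on that port. A standalone probe reproduces the same failure mode. This is a minimal sketch, assuming it runs inside the minikube node; the file name probe_apiserver.go and the 2-second timeout are illustrative choices, not taken from minikube:

	    // probe_apiserver.go - minimal standalone sketch (not minikube code)
	    // reproducing the check implied by the log: is anything listening on
	    // the apiserver port? "localhost:8443" is the address from the log.
	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	        if err != nil {
	            // Matches the failure mode above: connect: connection refused.
	            fmt.Println("apiserver not reachable:", err)
	            return
	        }
	        conn.Close()
	        fmt.Println("apiserver port is open")
	    }
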
	I1202 22:33:46.448707  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:46.448721  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:46.472236  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:46.472269  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:46.501830  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:46.501857  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
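
	The lines from 22:33:46 form one round of minikube's wait loop: it checks for a running apiserver process with pgrep, lists each expected control-plane container via crictl, and, finding none, gathers kubelet/dmesg/describe-nodes/containerd logs before retrying roughly three seconds later (the next round starts at 22:33:49 below). A rough sketch of that polling pattern follows, assuming crictl is installed and sudo is available; it illustrates the loop visible in the log, not minikube's actual implementation, and the component list is copied from the cri.go lines above:

	    // poll_components.go - rough sketch of the polling pattern in the log
	    // (an illustration, not minikube's implementation).
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    var components = []string{
	        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	        "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	    }

	    func main() {
	        for i := 0; i < 3; i++ { // a few rounds; the log shows one round every ~3s
	            for _, name := range components {
	                // Mirrors: sudo crictl ps -a --quiet --name=<component>
	                out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	                if len(out) == 0 {
	                    fmt.Printf("no container found matching %q\n", name)
	                }
	            }
	            time.Sleep(3 * time.Second)
	        }
	    }
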
	I1202 22:33:49.060676  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:49.071150  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:49.071224  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:49.095927  546345 cri.go:89] found id: ""
	I1202 22:33:49.095949  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.095963  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:49.095970  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:49.096027  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:49.121814  546345 cri.go:89] found id: ""
	I1202 22:33:49.121837  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.121846  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:49.121853  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:49.121911  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:49.150554  546345 cri.go:89] found id: ""
	I1202 22:33:49.150582  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.150590  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:49.150596  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:49.150660  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:49.174636  546345 cri.go:89] found id: ""
	I1202 22:33:49.174660  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.174668  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:49.174675  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:49.174757  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:49.198993  546345 cri.go:89] found id: ""
	I1202 22:33:49.199019  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.199028  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:49.199035  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:49.199122  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:49.237206  546345 cri.go:89] found id: ""
	I1202 22:33:49.237280  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.237304  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:49.237327  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:49.237412  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:49.262326  546345 cri.go:89] found id: ""
	I1202 22:33:49.262395  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.262418  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:49.262437  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:49.262508  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:49.287127  546345 cri.go:89] found id: ""
	I1202 22:33:49.287192  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.287215  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:49.287239  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:49.287269  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:49.365279  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:49.365438  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:49.383138  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:49.383164  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:49.454034  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:49.446536   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.447192   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.448807   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.449446   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.450982   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:49.446536   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.447192   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.448807   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.449446   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.450982   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:49.454054  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:49.454066  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:49.478949  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:49.478982  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
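
	The "container status" step above uses a shell fallback chain: the command substitution picks crictl when "which crictl" finds it, and if the crictl invocation fails, the "||" falls through to "docker ps -a". The same prefer-crictl-then-docker behavior can be sketched in Go, under the assumption that at least one of the two CLIs is installed; this is an illustration, not minikube code:

	    // container_status.go - sketch of the crictl-or-docker fallback used in
	    // the "container status" step (an illustration, not minikube code).
	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // Prefer crictl; fall back to docker if crictl is absent or fails,
	        // mirroring: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	        if err != nil {
	            out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	        }
	        if err != nil {
	            fmt.Println("no container runtime CLI available:", err)
	            return
	        }
	        fmt.Print(string(out))
	    }
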
	I1202 22:33:52.007120  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:52.018354  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:52.018431  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:52.048418  546345 cri.go:89] found id: ""
	I1202 22:33:52.048502  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.048527  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:52.048554  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:52.048670  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:52.075756  546345 cri.go:89] found id: ""
	I1202 22:33:52.075795  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.075804  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:52.075811  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:52.075875  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:52.102101  546345 cri.go:89] found id: ""
	I1202 22:33:52.102128  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.102138  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:52.102145  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:52.102213  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:52.127350  546345 cri.go:89] found id: ""
	I1202 22:33:52.127375  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.127390  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:52.127397  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:52.127461  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:52.152298  546345 cri.go:89] found id: ""
	I1202 22:33:52.152325  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.152334  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:52.152340  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:52.152398  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:52.176927  546345 cri.go:89] found id: ""
	I1202 22:33:52.176952  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.176960  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:52.176966  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:52.177023  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:52.203976  546345 cri.go:89] found id: ""
	I1202 22:33:52.204003  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.204012  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:52.204018  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:52.204077  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:52.229381  546345 cri.go:89] found id: ""
	I1202 22:33:52.229408  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.229416  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:52.229425  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:52.229443  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:52.292540  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:52.283085   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.283828   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.285448   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.285967   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.287627   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:52.283085   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.283828   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.285448   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.285967   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.287627   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:52.292561  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:52.292574  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:52.324946  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:52.325102  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:52.369542  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:52.369568  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:52.436122  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:52.436159  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:54.953633  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:54.963990  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:54.964062  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:54.991840  546345 cri.go:89] found id: ""
	I1202 22:33:54.991865  546345 logs.go:282] 0 containers: []
	W1202 22:33:54.991873  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:54.991880  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:54.991937  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:55.024217  546345 cri.go:89] found id: ""
	I1202 22:33:55.024241  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.024250  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:55.024258  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:55.024320  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:55.048985  546345 cri.go:89] found id: ""
	I1202 22:33:55.049007  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.049015  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:55.049021  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:55.049086  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:55.073787  546345 cri.go:89] found id: ""
	I1202 22:33:55.073809  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.073818  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:55.073825  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:55.073887  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:55.097827  546345 cri.go:89] found id: ""
	I1202 22:33:55.097849  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.097857  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:55.097864  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:55.097929  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:55.127096  546345 cri.go:89] found id: ""
	I1202 22:33:55.127119  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.127127  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:55.127135  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:55.127247  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:55.155895  546345 cri.go:89] found id: ""
	I1202 22:33:55.155920  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.155929  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:55.155936  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:55.155998  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:55.184917  546345 cri.go:89] found id: ""
	I1202 22:33:55.184943  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.184951  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:55.184960  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:55.184973  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:55.245409  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:55.238197   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.238600   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.240244   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.240779   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.242395   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:55.238197   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.238600   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.240244   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.240779   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.242395   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:55.245430  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:55.245443  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:55.269272  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:55.269303  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:55.324186  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:55.324256  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:55.407948  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:55.408021  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:57.927547  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:57.938134  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:57.938208  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:57.966983  546345 cri.go:89] found id: ""
	I1202 22:33:57.967016  546345 logs.go:282] 0 containers: []
	W1202 22:33:57.967025  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:57.967031  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:57.967090  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:57.990911  546345 cri.go:89] found id: ""
	I1202 22:33:57.990934  546345 logs.go:282] 0 containers: []
	W1202 22:33:57.990942  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:57.990949  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:57.991006  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:58.027051  546345 cri.go:89] found id: ""
	I1202 22:33:58.027076  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.027085  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:58.027091  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:58.027170  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:58.052767  546345 cri.go:89] found id: ""
	I1202 22:33:58.052791  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.052801  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:58.052808  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:58.052866  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:58.077589  546345 cri.go:89] found id: ""
	I1202 22:33:58.077616  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.077626  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:58.077634  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:58.077736  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:58.102352  546345 cri.go:89] found id: ""
	I1202 22:33:58.102377  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.102385  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:58.102394  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:58.102453  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:58.127151  546345 cri.go:89] found id: ""
	I1202 22:33:58.127174  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.127183  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:58.127203  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:58.127264  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:58.153068  546345 cri.go:89] found id: ""
	I1202 22:33:58.153097  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.153106  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:58.153116  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:58.153128  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:58.207341  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:58.207375  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:58.223908  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:58.223993  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:58.303303  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:58.282435   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.282890   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.284669   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.285085   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.286613   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:58.282435   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.282890   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.284669   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.285085   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.286613   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:58.303374  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:58.303401  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:58.339284  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:58.339358  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:00.884684  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:00.894955  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:00.895043  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:00.919607  546345 cri.go:89] found id: ""
	I1202 22:34:00.919638  546345 logs.go:282] 0 containers: []
	W1202 22:34:00.919648  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:00.919655  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:00.919714  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:00.943845  546345 cri.go:89] found id: ""
	I1202 22:34:00.943869  546345 logs.go:282] 0 containers: []
	W1202 22:34:00.943877  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:00.943883  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:00.943942  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:00.969291  546345 cri.go:89] found id: ""
	I1202 22:34:00.969316  546345 logs.go:282] 0 containers: []
	W1202 22:34:00.969325  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:00.969332  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:00.969387  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:00.998170  546345 cri.go:89] found id: ""
	I1202 22:34:00.998194  546345 logs.go:282] 0 containers: []
	W1202 22:34:00.998203  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:00.998210  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:00.998267  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:01.028082  546345 cri.go:89] found id: ""
	I1202 22:34:01.028108  546345 logs.go:282] 0 containers: []
	W1202 22:34:01.028118  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:01.028125  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:01.028182  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:01.052163  546345 cri.go:89] found id: ""
	I1202 22:34:01.052190  546345 logs.go:282] 0 containers: []
	W1202 22:34:01.052198  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:01.052204  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:01.052261  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:01.079605  546345 cri.go:89] found id: ""
	I1202 22:34:01.079638  546345 logs.go:282] 0 containers: []
	W1202 22:34:01.079648  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:01.079655  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:01.079727  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:01.104672  546345 cri.go:89] found id: ""
	I1202 22:34:01.104697  546345 logs.go:282] 0 containers: []
	W1202 22:34:01.104705  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:01.104714  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:01.104727  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:01.168637  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:01.168689  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:01.186088  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:01.186120  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:01.254373  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:01.244820   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.245479   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.247310   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.247977   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.250513   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:34:01.244820   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.245479   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.247310   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.247977   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.250513   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:01.254405  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:01.254421  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:01.279534  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:01.279570  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:03.844056  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:03.854485  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:03.854559  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:03.883518  546345 cri.go:89] found id: ""
	I1202 22:34:03.883539  546345 logs.go:282] 0 containers: []
	W1202 22:34:03.883547  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:03.883555  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:03.883616  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:03.907609  546345 cri.go:89] found id: ""
	I1202 22:34:03.907634  546345 logs.go:282] 0 containers: []
	W1202 22:34:03.907643  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:03.907650  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:03.907708  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:03.931661  546345 cri.go:89] found id: ""
	I1202 22:34:03.931686  546345 logs.go:282] 0 containers: []
	W1202 22:34:03.931694  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:03.931701  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:03.931762  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:03.956212  546345 cri.go:89] found id: ""
	I1202 22:34:03.956236  546345 logs.go:282] 0 containers: []
	W1202 22:34:03.956245  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:03.956252  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:03.956310  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:03.982858  546345 cri.go:89] found id: ""
	I1202 22:34:03.982882  546345 logs.go:282] 0 containers: []
	W1202 22:34:03.982890  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:03.982899  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:03.982955  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:04.008609  546345 cri.go:89] found id: ""
	I1202 22:34:04.008637  546345 logs.go:282] 0 containers: []
	W1202 22:34:04.008646  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:04.008654  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:04.008718  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:04.034395  546345 cri.go:89] found id: ""
	I1202 22:34:04.034426  546345 logs.go:282] 0 containers: []
	W1202 22:34:04.034436  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:04.034443  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:04.034503  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:04.059450  546345 cri.go:89] found id: ""
	I1202 22:34:04.059474  546345 logs.go:282] 0 containers: []
	W1202 22:34:04.059482  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:04.059492  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:04.059503  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:04.116204  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:04.116237  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:04.131753  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:04.131779  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:04.195398  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:04.187783   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.188327   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.189976   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.190535   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.192070   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:34:04.187783   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.188327   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.189976   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.190535   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.192070   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:04.195417  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:04.195431  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:04.220265  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:04.220302  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:06.748017  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:06.758416  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:06.758487  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:06.786850  546345 cri.go:89] found id: ""
	I1202 22:34:06.786877  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.786886  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:06.786893  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:06.786958  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:06.811248  546345 cri.go:89] found id: ""
	I1202 22:34:06.811274  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.811283  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:06.811290  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:06.811352  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:06.835885  546345 cri.go:89] found id: ""
	I1202 22:34:06.835911  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.835920  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:06.835927  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:06.835986  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:06.861031  546345 cri.go:89] found id: ""
	I1202 22:34:06.861057  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.861066  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:06.861076  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:06.861137  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:06.885492  546345 cri.go:89] found id: ""
	I1202 22:34:06.885518  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.885526  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:06.885533  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:06.885621  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:06.911207  546345 cri.go:89] found id: ""
	I1202 22:34:06.911233  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.911242  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:06.911249  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:06.911307  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:06.936761  546345 cri.go:89] found id: ""
	I1202 22:34:06.936786  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.936794  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:06.936801  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:06.936858  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:06.961200  546345 cri.go:89] found id: ""
	I1202 22:34:06.961225  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.961233  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:06.961242  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:06.961253  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:07.017396  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:07.017432  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:07.033140  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:07.033220  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:07.098724  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:07.091082   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.091775   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.093263   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.093721   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.095156   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:34:07.091082   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.091775   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.093263   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.093721   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.095156   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:07.098749  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:07.098764  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:07.123278  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:07.123313  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:09.654822  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:09.666550  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:09.666631  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:09.696479  546345 cri.go:89] found id: ""
	I1202 22:34:09.696501  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.696510  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:09.696516  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:09.696573  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:09.720695  546345 cri.go:89] found id: ""
	I1202 22:34:09.720717  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.720725  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:09.720732  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:09.720789  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:09.743340  546345 cri.go:89] found id: ""
	I1202 22:34:09.743366  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.743374  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:09.743381  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:09.743441  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:09.771827  546345 cri.go:89] found id: ""
	I1202 22:34:09.771851  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.771859  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:09.771866  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:09.771942  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:09.800440  546345 cri.go:89] found id: ""
	I1202 22:34:09.800511  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.800522  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:09.800529  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:09.800599  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:09.827898  546345 cri.go:89] found id: ""
	I1202 22:34:09.827933  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.827942  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:09.827949  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:09.828053  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:09.851874  546345 cri.go:89] found id: ""
	I1202 22:34:09.851909  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.851918  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:09.851925  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:09.852023  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:09.876063  546345 cri.go:89] found id: ""
	I1202 22:34:09.876098  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.876106  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:09.876136  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:09.876157  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:09.931102  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:09.931140  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:09.947006  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:09.947033  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:10.016167  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:10.007437   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.008283   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.010218   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.010846   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.012661   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:34:10.007437   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.008283   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.010218   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.010846   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.012661   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:10.016189  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:10.016202  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:10.042713  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:10.042746  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:12.574841  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:12.602704  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:12.602776  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:12.630259  546345 cri.go:89] found id: ""
	I1202 22:34:12.630283  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.630291  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:12.630298  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:12.630356  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:12.653540  546345 cri.go:89] found id: ""
	I1202 22:34:12.653571  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.653580  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:12.653587  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:12.653726  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:12.678660  546345 cri.go:89] found id: ""
	I1202 22:34:12.678685  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.678694  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:12.678701  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:12.678761  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:12.702118  546345 cri.go:89] found id: ""
	I1202 22:34:12.702147  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.702155  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:12.702162  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:12.702262  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:12.729590  546345 cri.go:89] found id: ""
	I1202 22:34:12.729615  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.729624  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:12.729631  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:12.729713  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:12.755560  546345 cri.go:89] found id: ""
	I1202 22:34:12.755586  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.755594  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:12.755601  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:12.755656  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:12.788269  546345 cri.go:89] found id: ""
	I1202 22:34:12.788293  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.788302  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:12.788308  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:12.788366  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:12.812214  546345 cri.go:89] found id: ""
	I1202 22:34:12.812239  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.812248  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:12.812257  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:12.812268  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:12.841941  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:12.841966  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:12.896188  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:12.896219  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:12.911694  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:12.911721  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:12.975342  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:12.967919   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.968476   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.970099   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.970747   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.972209   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:34:12.967919   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.968476   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.970099   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.970747   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.972209   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:12.975377  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:12.975389  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:15.502887  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:15.513338  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:15.513418  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:15.536875  546345 cri.go:89] found id: ""
	I1202 22:34:15.536897  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.536905  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:15.536911  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:15.536970  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:15.573309  546345 cri.go:89] found id: ""
	I1202 22:34:15.573335  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.573360  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:15.573368  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:15.573433  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:15.623126  546345 cri.go:89] found id: ""
	I1202 22:34:15.623149  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.623157  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:15.623164  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:15.623221  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:15.657458  546345 cri.go:89] found id: ""
	I1202 22:34:15.657484  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.657493  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:15.657500  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:15.657568  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:15.681354  546345 cri.go:89] found id: ""
	I1202 22:34:15.681380  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.681389  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:15.681395  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:15.681456  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:15.705775  546345 cri.go:89] found id: ""
	I1202 22:34:15.705848  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.705874  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:15.705894  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:15.705971  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:15.731425  546345 cri.go:89] found id: ""
	I1202 22:34:15.731448  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.731457  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:15.731464  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:15.731521  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:15.755658  546345 cri.go:89] found id: ""
	I1202 22:34:15.755682  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.755690  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:15.755699  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:15.755711  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:15.811079  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:15.811113  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:15.827246  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:15.827272  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:15.889878  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:15.882005   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.882392   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.884118   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.884767   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.886280   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:34:15.882005   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.882392   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.884118   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.884767   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.886280   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:15.889899  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:15.889912  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:15.915317  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:15.915350  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:18.445059  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:18.458773  546345 out.go:203] 
	W1202 22:34:18.461733  546345 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1202 22:34:18.461774  546345 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1202 22:34:18.461784  546345 out.go:285] * Related issues:
	W1202 22:34:18.461797  546345 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1202 22:34:18.461818  546345 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1202 22:34:18.464650  546345 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311616018Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311627086Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311638754Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311647836Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311661645Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311673271Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311695220Z" level=info msg="runtime interface created"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311701619Z" level=info msg="created NRI interface"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311714862Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311747001Z" level=info msg="Connect containerd service"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311985262Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.313086719Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.326105435Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.326173330Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.326202376Z" level=info msg="Start subscribing containerd event"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.326244262Z" level=info msg="Start recovering state"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.346481012Z" level=info msg="Start event monitor"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.346541925Z" level=info msg="Start cni network conf syncer for default"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.346551911Z" level=info msg="Start streaming server"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.346565096Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.346574146Z" level=info msg="runtime interface starting up..."
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.346580505Z" level=info msg="starting plugins..."
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.346733550Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 22:28:16 newest-cni-250247 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.348731317Z" level=info msg="containerd successfully booted in 0.056907s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:27.768502   13831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:27.769014   13831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:27.770485   13831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:27.770880   13831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:27.772298   13831 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 22:34:27 up  4:16,  0 user,  load average: 1.70, 1.01, 1.11
	Linux newest-cni-250247 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 22:34:23 newest-cni-250247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:34:23 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:34:24 newest-cni-250247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:25 newest-cni-250247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:25 newest-cni-250247 kubelet[13677]: E1202 22:34:25.358362   13677 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:34:25 newest-cni-250247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:34:25 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:34:26 newest-cni-250247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 02 22:34:26 newest-cni-250247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:26 newest-cni-250247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:26 newest-cni-250247 kubelet[13714]: E1202 22:34:26.097378   13714 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:34:26 newest-cni-250247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:34:26 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:34:26 newest-cni-250247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 02 22:34:26 newest-cni-250247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:26 newest-cni-250247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:26 newest-cni-250247 kubelet[13734]: E1202 22:34:26.851798   13734 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:34:26 newest-cni-250247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:34:26 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:34:27 newest-cni-250247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 02 22:34:27 newest-cni-250247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:27 newest-cni-250247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:27 newest-cni-250247 kubelet[13785]: E1202 22:34:27.598044   13785 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:34:27 newest-cni-250247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:34:27 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
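
The kubelet section above shows the underlying failure: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so the apiserver can never appear and minikube exits with K8S_APISERVER_MISSING. A minimal check of the cgroup mode, assuming shell access to the Jenkins host (these commands are a sketch, not captured output from this run):

	# Docker reports which cgroup version it is driving ("1" or "2");
	# given the kubelet error above, this Ubuntu 20.04 host is expected to print 1.
	docker info --format '{{.CgroupVersion}}'
	# Inside the minikube node: cgroup v2 mounts as cgroup2fs, cgroup v1 as tmpfs.
	minikube ssh -p newest-cni-250247 -- stat -fc %T /sys/fs/cgroup/
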
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-250247 -n newest-cni-250247
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-250247 -n newest-cni-250247: exit status 2 (333.494155ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-250247" apiserver is not running, skipping kubectl commands (state="Stopped")
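
With the apiserver stopped, kubectl commands are skipped; the control-plane probe that produced the 'found id: ""' entries above can be repeated by hand. A sketch, assuming the profile container is still running (an empty result confirms that no kube-apiserver container was ever created):

	minikube ssh -p newest-cni-250247 -- sudo crictl ps -a --name kube-apiserver
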
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-250247
helpers_test.go:243: (dbg) docker inspect newest-cni-250247:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2",
	        "Created": "2025-12-02T22:17:45.695373395Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 546476,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T22:28:10.516417593Z",
	            "FinishedAt": "2025-12-02T22:28:08.91957983Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2/hosts",
	        "LogPath": "/var/lib/docker/containers/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2/8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2-json.log",
	        "Name": "/newest-cni-250247",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-250247:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-250247",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8d631b193c97e44c3aa19b82b78cfefacbed2663d43139afeee3256e3bbf16d2",
	                "LowerDir": "/var/lib/docker/overlay2/ca046fece97404be279f28506c35572e6b66d2a0b57e70b756461317d0a04310-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ca046fece97404be279f28506c35572e6b66d2a0b57e70b756461317d0a04310/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ca046fece97404be279f28506c35572e6b66d2a0b57e70b756461317d0a04310/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ca046fece97404be279f28506c35572e6b66d2a0b57e70b756461317d0a04310/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-250247",
	                "Source": "/var/lib/docker/volumes/newest-cni-250247/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-250247",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-250247",
	                "name.minikube.sigs.k8s.io": "newest-cni-250247",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "19a1ca374f2ac15ceeb8732ad47e7e4e789db7b4dc20ead5353b14dfc8ce4376",
	            "SandboxKey": "/var/run/docker/netns/19a1ca374f2a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-250247": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:22:6b:2b:a3:2a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cfffc9981d9cab6ce5981c2e79bfb0dd15ae8455b64d0bfc795000bbbe273d91",
	                    "EndpointID": "6077ce03ce851ef49c2205e3affa2e3c9a93685b0b2e5a16a743470850763606",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-250247",
	                        "8d631b193c97"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
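
Two details in the inspect output bear on this failure: "CgroupnsMode": "host" means the kicbase container shares the host's cgroup hierarchy (consistent with the kubelet cgroup v1 validation error above), and port 8443/tcp is published to 127.0.0.1:33426, so the apiserver can be probed from the host directly. A sketch (with no apiserver running, curl should fail with "connection refused", matching the localhost:8443 errors in the log):

	# -k because the apiserver serves a self-signed certificate;
	# curl exits 7 while nothing is listening on the mapped port.
	curl -sk https://127.0.0.1:33426/healthz
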
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-250247 -n newest-cni-250247
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-250247 -n newest-cni-250247: exit status 2 (301.89913ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-250247 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-250247 logs -n 25: (1.640534121s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                            ARGS                                                                                                                            │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p embed-certs-716386                                                                                                                                                                                                                                      │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p embed-certs-716386                                                                                                                                                                                                                                      │ embed-certs-716386           │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ delete  │ -p disable-driver-mounts-122586                                                                                                                                                                                                                            │ disable-driver-mounts-122586 │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:15 UTC │
	│ start   │ -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:15 UTC │ 02 Dec 25 22:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-444714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ stop    │ -p default-k8s-diff-port-444714 --alsologtostderr -v=3                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-444714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:16 UTC │
	│ start   │ -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:16 UTC │ 02 Dec 25 22:17 UTC │
	│ image   │ default-k8s-diff-port-444714 image list --format=json                                                                                                                                                                                                      │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ pause   │ -p default-k8s-diff-port-444714 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ unpause │ -p default-k8s-diff-port-444714 --alsologtostderr -v=1                                                                                                                                                                                                     │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ delete  │ -p default-k8s-diff-port-444714                                                                                                                                                                                                                            │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ delete  │ -p default-k8s-diff-port-444714                                                                                                                                                                                                                            │ default-k8s-diff-port-444714 │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │ 02 Dec 25 22:17 UTC │
	│ start   │ -p newest-cni-250247 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:17 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-904303 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:21 UTC │                     │
	│ stop    │ -p no-preload-904303 --alsologtostderr -v=3                                                                                                                                                                                                                │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:23 UTC │ 02 Dec 25 22:23 UTC │
	│ addons  │ enable dashboard -p no-preload-904303 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:23 UTC │ 02 Dec 25 22:23 UTC │
	│ start   │ -p no-preload-904303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-904303            │ jenkins │ v1.37.0 │ 02 Dec 25 22:23 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-250247 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                    │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:26 UTC │                     │
	│ stop    │ -p newest-cni-250247 --alsologtostderr -v=3                                                                                                                                                                                                                │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:28 UTC │ 02 Dec 25 22:28 UTC │
	│ addons  │ enable dashboard -p newest-cni-250247 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                               │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:28 UTC │ 02 Dec 25 22:28 UTC │
	│ start   │ -p newest-cni-250247 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:28 UTC │                     │
	│ image   │ newest-cni-250247 image list --format=json                                                                                                                                                                                                                 │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:34 UTC │ 02 Dec 25 22:34 UTC │
	│ pause   │ -p newest-cni-250247 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:34 UTC │ 02 Dec 25 22:34 UTC │
	│ unpause │ -p newest-cni-250247 --alsologtostderr -v=1                                                                                                                                                                                                                │ newest-cni-250247            │ jenkins │ v1.37.0 │ 02 Dec 25 22:34 UTC │ 02 Dec 25 22:34 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 22:28:09
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 22:28:09.982860  546345 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:28:09.982990  546345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:28:09.983001  546345 out.go:374] Setting ErrFile to fd 2...
	I1202 22:28:09.983006  546345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:28:09.983258  546345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 22:28:09.983629  546345 out.go:368] Setting JSON to false
	I1202 22:28:09.984474  546345 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15028,"bootTime":1764699462,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 22:28:09.984540  546345 start.go:143] virtualization:  
	I1202 22:28:09.987326  546345 out.go:179] * [newest-cni-250247] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:28:09.991071  546345 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:28:09.991190  546345 notify.go:221] Checking for updates...
	I1202 22:28:09.996957  546345 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:28:09.999951  546345 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:28:10.003165  546345 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 22:28:10.010024  546345 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:28:10.023215  546345 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:28:10.026934  546345 config.go:182] Loaded profile config "newest-cni-250247": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:28:10.027740  546345 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:28:10.065520  546345 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:28:10.065629  546345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:28:10.146197  546345 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:28:10.137008488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:28:10.146302  546345 docker.go:319] overlay module found
	I1202 22:28:10.149701  546345 out.go:179] * Using the docker driver based on existing profile
	I1202 22:28:10.152553  546345 start.go:309] selected driver: docker
	I1202 22:28:10.152579  546345 start.go:927] validating driver "docker" against &{Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:28:10.152714  546345 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:28:10.153449  546345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:28:10.206765  546345 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:28:10.197797072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:28:10.207092  546345 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1202 22:28:10.207126  546345 cni.go:84] Creating CNI manager for ""
	I1202 22:28:10.207191  546345 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:28:10.207234  546345 start.go:353] cluster config:
	{Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:28:10.210373  546345 out.go:179] * Starting "newest-cni-250247" primary control-plane node in "newest-cni-250247" cluster
	I1202 22:28:10.213164  546345 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 22:28:10.216139  546345 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 22:28:10.218905  546345 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:28:10.218974  546345 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 22:28:10.241012  546345 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 22:28:10.241034  546345 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	W1202 22:28:10.277912  546345 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	W1202 22:28:10.461684  546345 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
	I1202 22:28:10.461922  546345 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json ...
	I1202 22:28:10.461950  546345 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462038  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1202 22:28:10.462049  546345 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.248µs
	I1202 22:28:10.462062  546345 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1202 22:28:10.462074  546345 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462104  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1202 22:28:10.462109  546345 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 36.282µs
	I1202 22:28:10.462115  546345 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1202 22:28:10.462125  546345 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462157  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1202 22:28:10.462162  546345 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 38.727µs
	I1202 22:28:10.462169  546345 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1202 22:28:10.462179  546345 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462196  546345 cache.go:243] Successfully downloaded all kic artifacts
	I1202 22:28:10.462206  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1202 22:28:10.462212  546345 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 34.534µs
	I1202 22:28:10.462218  546345 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1202 22:28:10.462227  546345 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462237  546345 start.go:360] acquireMachinesLock for newest-cni-250247: {Name:mk16586a4ea8dcb4ae29d3b0c6fe6a71644be6ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462253  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1202 22:28:10.462258  546345 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 32.098µs
	I1202 22:28:10.462265  546345 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1202 22:28:10.462274  546345 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462280  546345 start.go:364] duration metric: took 29.16µs to acquireMachinesLock for "newest-cni-250247"
	I1202 22:28:10.462305  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1202 22:28:10.462305  546345 start.go:96] Skipping create...Using existing machine configuration
	I1202 22:28:10.462319  546345 fix.go:54] fixHost starting: 
	I1202 22:28:10.462321  546345 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462350  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
	I1202 22:28:10.462360  546345 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 34.731µs
	I1202 22:28:10.462365  546345 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1202 22:28:10.462378  546345 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:28:10.462404  546345 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1202 22:28:10.462408  546345 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 35.396µs
	I1202 22:28:10.462414  546345 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1202 22:28:10.462311  546345 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 38.21µs
	I1202 22:28:10.462504  546345 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1202 22:28:10.462515  546345 cache.go:87] Successfully saved all images to host disk.
	I1202 22:28:10.462628  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:10.483660  546345 fix.go:112] recreateIfNeeded on newest-cni-250247: state=Stopped err=<nil>
	W1202 22:28:10.483692  546345 fix.go:138] unexpected machine state, will restart: <nil>
	W1202 22:28:08.293846  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:10.294170  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:10.487123  546345 out.go:252] * Restarting existing docker container for "newest-cni-250247" ...
	I1202 22:28:10.487212  546345 cli_runner.go:164] Run: docker start newest-cni-250247
	I1202 22:28:10.752920  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:10.774107  546345 kic.go:430] container "newest-cni-250247" state is running.
	I1202 22:28:10.775430  546345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:28:10.803310  546345 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/config.json ...
	I1202 22:28:10.803660  546345 machine.go:94] provisionDockerMachine start ...
	I1202 22:28:10.803741  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:10.835254  546345 main.go:143] libmachine: Using SSH client type: native
	I1202 22:28:10.835574  546345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1202 22:28:10.835582  546345 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 22:28:10.836341  546345 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47630->127.0.0.1:33423: read: connection reset by peer
	I1202 22:28:13.985241  546345 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-250247
	
	I1202 22:28:13.985267  546345 ubuntu.go:182] provisioning hostname "newest-cni-250247"
	I1202 22:28:13.985331  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:14.004448  546345 main.go:143] libmachine: Using SSH client type: native
	I1202 22:28:14.004830  546345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1202 22:28:14.004852  546345 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-250247 && echo "newest-cni-250247" | sudo tee /etc/hostname
	I1202 22:28:14.162890  546345 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-250247
	
	I1202 22:28:14.162970  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:14.180049  546345 main.go:143] libmachine: Using SSH client type: native
	I1202 22:28:14.180364  546345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1202 22:28:14.180385  546345 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-250247' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-250247/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-250247' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 22:28:14.325738  546345 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 22:28:14.325762  546345 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 22:28:14.325781  546345 ubuntu.go:190] setting up certificates
	I1202 22:28:14.325790  546345 provision.go:84] configureAuth start
	I1202 22:28:14.325861  546345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:28:14.342936  546345 provision.go:143] copyHostCerts
	I1202 22:28:14.343009  546345 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 22:28:14.343017  546345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 22:28:14.343091  546345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 22:28:14.343188  546345 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 22:28:14.343193  546345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 22:28:14.343217  546345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 22:28:14.343264  546345 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 22:28:14.343269  546345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 22:28:14.343292  546345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 22:28:14.343342  546345 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.newest-cni-250247 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-250247]
	I1202 22:28:14.770203  546345 provision.go:177] copyRemoteCerts
	I1202 22:28:14.770270  546345 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 22:28:14.770310  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:14.787300  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:14.893004  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 22:28:14.909339  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 22:28:14.926255  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1202 22:28:14.942726  546345 provision.go:87] duration metric: took 616.921074ms to configureAuth
	I1202 22:28:14.942753  546345 ubuntu.go:206] setting minikube options for container-runtime
	I1202 22:28:14.942983  546345 config.go:182] Loaded profile config "newest-cni-250247": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:28:14.942996  546345 machine.go:97] duration metric: took 4.139308859s to provisionDockerMachine
	I1202 22:28:14.943006  546345 start.go:293] postStartSetup for "newest-cni-250247" (driver="docker")
	I1202 22:28:14.943017  546345 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 22:28:14.943072  546345 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 22:28:14.943129  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:14.960329  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:15.069600  546345 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 22:28:15.072888  546345 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 22:28:15.072916  546345 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 22:28:15.072928  546345 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 22:28:15.073008  546345 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 22:28:15.073125  546345 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 22:28:15.073236  546345 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 22:28:15.080571  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:28:15.098287  546345 start.go:296] duration metric: took 155.265122ms for postStartSetup
	I1202 22:28:15.098433  546345 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:28:15.098514  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:15.116407  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:15.218632  546345 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 22:28:15.223330  546345 fix.go:56] duration metric: took 4.761004698s for fixHost
	I1202 22:28:15.223357  546345 start.go:83] releasing machines lock for "newest-cni-250247", held for 4.761068204s
	I1202 22:28:15.223423  546345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250247
	I1202 22:28:15.240165  546345 ssh_runner.go:195] Run: cat /version.json
	I1202 22:28:15.240226  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:15.240474  546345 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 22:28:15.240537  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:15.266111  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:15.266672  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:15.465947  546345 ssh_runner.go:195] Run: systemctl --version
	I1202 22:28:15.472302  546345 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 22:28:15.476459  546345 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 22:28:15.476528  546345 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 22:28:15.484047  546345 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 22:28:15.484071  546345 start.go:496] detecting cgroup driver to use...
	I1202 22:28:15.484132  546345 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 22:28:15.484196  546345 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 22:28:15.501336  546345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 22:28:15.514809  546345 docker.go:218] disabling cri-docker service (if available) ...
	I1202 22:28:15.514870  546345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 22:28:15.529978  546345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 22:28:15.542949  546345 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 22:28:15.646754  546345 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 22:28:15.763470  546345 docker.go:234] disabling docker service ...
	I1202 22:28:15.763534  546345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 22:28:15.778139  546345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 22:28:15.790687  546345 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 22:28:15.899099  546345 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 22:28:16.013695  546345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 22:28:16.027166  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 22:28:16.044232  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 22:28:16.054377  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 22:28:16.064256  546345 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 22:28:16.064370  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 22:28:16.074182  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:28:16.083929  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 22:28:16.093428  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:28:16.103465  546345 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 22:28:16.111974  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 22:28:16.120391  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 22:28:16.129324  546345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1202 22:28:16.138640  546345 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 22:28:16.146079  546345 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 22:28:16.153383  546345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:28:16.258631  546345 ssh_runner.go:195] Run: sudo systemctl restart containerd
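	The sed edits above pin the sandbox image, force the cgroupfs driver (SystemdCgroup = false), normalize the runc runtime to io.containerd.runc.v2, and re-enable unprivileged ports. A rough spot-check after the restart, assuming the default config path, would be:
	
		grep -E 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
		# expected, given the edits above (roughly):
		#   sandbox_image = "registry.k8s.io/pause:3.10.1"
		#   SystemdCgroup = false
		#   conf_dir = "/etc/cni/net.d"
		#   enable_unprivileged_ports = true
	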
	I1202 22:28:16.349094  546345 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 22:28:16.349206  546345 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 22:28:16.353088  546345 start.go:564] Will wait 60s for crictl version
	I1202 22:28:16.353236  546345 ssh_runner.go:195] Run: which crictl
	I1202 22:28:16.356669  546345 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 22:28:16.382942  546345 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 22:28:16.383050  546345 ssh_runner.go:195] Run: containerd --version
	I1202 22:28:16.402826  546345 ssh_runner.go:195] Run: containerd --version
	I1202 22:28:16.429935  546345 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
	I1202 22:28:16.432731  546345 cli_runner.go:164] Run: docker network inspect newest-cni-250247 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:28:16.448989  546345 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 22:28:16.452808  546345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:28:16.464968  546345 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1202 22:28:12.794132  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:15.294790  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:16.467854  546345 kubeadm.go:884] updating cluster {Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 22:28:16.468035  546345 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
	I1202 22:28:16.468117  546345 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:28:16.491782  546345 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 22:28:16.491805  546345 cache_images.go:86] Images are preloaded, skipping loading
	I1202 22:28:16.491813  546345 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 containerd true true} ...
	I1202 22:28:16.491914  546345 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-250247 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
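	This unit drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines down). On the node, the effective unit could be inspected with standard systemd tooling, for instance:
	
		systemctl cat kubelet                 # shows kubelet.service plus the 10-kubeadm.conf drop-in
		systemctl show kubelet -p ExecStart   # the merged ExecStart after the override
	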
	I1202 22:28:16.491984  546345 ssh_runner.go:195] Run: sudo crictl info
	I1202 22:28:16.515416  546345 cni.go:84] Creating CNI manager for ""
	I1202 22:28:16.515440  546345 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 22:28:16.515457  546345 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1202 22:28:16.515491  546345 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-250247 NodeName:newest-cni-250247 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 22:28:16.515606  546345 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-250247"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
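	The generated config above is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp below). As an aside, a config in this shape can be sanity-checked without touching the cluster; on reasonably recent kubeadm releases something like the following should work:
	
		kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
		# or, more conservatively:
		kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run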
	
	I1202 22:28:16.515677  546345 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1202 22:28:16.522844  546345 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 22:28:16.522912  546345 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 22:28:16.529836  546345 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1202 22:28:16.541819  546345 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1202 22:28:16.553461  546345 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I1202 22:28:16.565531  546345 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 22:28:16.569041  546345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:28:16.578309  546345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:28:16.682927  546345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:28:16.699616  546345 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247 for IP: 192.168.85.2
	I1202 22:28:16.699641  546345 certs.go:195] generating shared ca certs ...
	I1202 22:28:16.699658  546345 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:28:16.699787  546345 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 22:28:16.699846  546345 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 22:28:16.699857  546345 certs.go:257] generating profile certs ...
	I1202 22:28:16.699953  546345 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/client.key
	I1202 22:28:16.700029  546345 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key.dc944fde
	I1202 22:28:16.700095  546345 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key
	I1202 22:28:16.700208  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 22:28:16.700249  546345 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 22:28:16.700262  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 22:28:16.700295  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 22:28:16.700323  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 22:28:16.700356  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 22:28:16.700412  546345 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:28:16.701077  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 22:28:16.721941  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 22:28:16.740644  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 22:28:16.759568  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 22:28:16.776264  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1202 22:28:16.794239  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 22:28:16.814293  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 22:28:16.833481  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/newest-cni-250247/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 22:28:16.852733  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 22:28:16.870078  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 22:28:16.886149  546345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 22:28:16.902507  546345 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 22:28:16.913942  546345 ssh_runner.go:195] Run: openssl version
	I1202 22:28:16.919938  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 22:28:16.927825  546345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:28:16.931606  546345 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:28:16.931675  546345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:28:16.974237  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
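	The hash-named symlink follows OpenSSL's hashed-directory convention: openssl x509 -hash -noout prints the certificate's subject hash (b5213941 for minikubeCA here), and OpenSSL resolves trusted CAs in /etc/ssl/certs via <hash>.0 links, so the symlink makes the CA discoverable by subject. For example:
	
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	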
	I1202 22:28:16.981828  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 22:28:16.989638  546345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 22:28:16.992999  546345 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 22:28:16.993061  546345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 22:28:17.033731  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
	I1202 22:28:17.041307  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 22:28:17.049114  546345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 22:28:17.052710  546345 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 22:28:17.052816  546345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 22:28:17.093368  546345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 22:28:17.101039  546345 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 22:28:17.104530  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 22:28:17.145234  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 22:28:17.186252  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 22:28:17.227251  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 22:28:17.270184  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 22:28:17.315680  546345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
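	Each -checkend 86400 invocation above exits nonzero if the certificate expires within the next 86400 seconds (24 hours); presumably this is how the restart path decides whether control-plane certificates need regeneration. Run standalone, the pattern looks like:
	
		openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 && echo "valid for >24h" || echo "expiring within 24h"
	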
	I1202 22:28:17.356357  546345 kubeadm.go:401] StartCluster: {Name:newest-cni-250247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-250247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:28:17.356449  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 22:28:17.356551  546345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:28:17.384974  546345 cri.go:89] found id: ""
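The empty found id: "" result means the crictl query matched no kube-system containers, so the next step falls back to probing for existing kubeadm files (/var/lib/kubelet/kubeadm-flags.env, config.yaml, /var/lib/minikube/etcd) before deciding whether a cluster restart is possible. The same query can be reproduced on the node directly; a sketch, assuming containerd's default socket path:

	# Socket path is containerd's default; adjust if the node differs.
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
	  ps -a --quiet --label io.kubernetes.pod.namespace=kube-system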
	I1202 22:28:17.385084  546345 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 22:28:17.392914  546345 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 22:28:17.392983  546345 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 22:28:17.393055  546345 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 22:28:17.400365  546345 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 22:28:17.400969  546345 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-250247" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:28:17.401222  546345 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-261381/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-250247" cluster setting kubeconfig missing "newest-cni-250247" context setting]
	I1202 22:28:17.401752  546345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
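At this point the profile's cluster and context entries were missing from the shared kubeconfig, so it is repaired under a file lock before the restart proceeds. Whether a given context made it into the file can be checked with a stock kubectl call; a sketch against the path from the log:

	kubectl config get-contexts newest-cni-250247 \
	  --kubeconfig /home/jenkins/minikube-integration/21997-261381/kubeconfig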
	I1202 22:28:17.403065  546345 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 22:28:17.410696  546345 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1202 22:28:17.410762  546345 kubeadm.go:602] duration metric: took 17.7594ms to restartPrimaryControlPlane
	I1202 22:28:17.410793  546345 kubeadm.go:403] duration metric: took 54.438388ms to StartCluster
	I1202 22:28:17.410829  546345 settings.go:142] acquiring lock: {Name:mk484fa83ac7553aeb154b510943680cadb4046e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:28:17.410902  546345 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:28:17.412749  546345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/kubeconfig: {Name:mkd781d443d19bae09968a7cbb437a2381c536f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:28:17.413013  546345 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 22:28:17.416416  546345 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 22:28:17.416535  546345 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-250247"
	I1202 22:28:17.416566  546345 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-250247"
	I1202 22:28:17.416596  546345 config.go:182] Loaded profile config "newest-cni-250247": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:28:17.416607  546345 host.go:66] Checking if "newest-cni-250247" exists ...
	I1202 22:28:17.416873  546345 addons.go:70] Setting dashboard=true in profile "newest-cni-250247"
	I1202 22:28:17.416893  546345 addons.go:239] Setting addon dashboard=true in "newest-cni-250247"
	W1202 22:28:17.416900  546345 addons.go:248] addon dashboard should already be in state true
	I1202 22:28:17.416923  546345 host.go:66] Checking if "newest-cni-250247" exists ...
	I1202 22:28:17.417319  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:17.417762  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:17.418220  546345 addons.go:70] Setting default-storageclass=true in profile "newest-cni-250247"
	I1202 22:28:17.418240  546345 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-250247"
	I1202 22:28:17.418515  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:17.421722  546345 out.go:179] * Verifying Kubernetes components...
	I1202 22:28:17.424546  546345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:28:17.473567  546345 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1202 22:28:17.473567  546345 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 22:28:17.475110  546345 addons.go:239] Setting addon default-storageclass=true in "newest-cni-250247"
	I1202 22:28:17.475145  546345 host.go:66] Checking if "newest-cni-250247" exists ...
	I1202 22:28:17.475548  546345 cli_runner.go:164] Run: docker container inspect newest-cni-250247 --format={{.State.Status}}
	I1202 22:28:17.477614  546345 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:17.477633  546345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 22:28:17.477833  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:17.481801  546345 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1202 22:28:17.489727  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1202 22:28:17.489757  546345 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1202 22:28:17.489831  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:17.519689  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:17.519729  546345 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 22:28:17.519742  546345 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 22:28:17.519796  546345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250247
	I1202 22:28:17.551180  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:17.565506  546345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/newest-cni-250247/id_rsa Username:docker}
	I1202 22:28:17.644850  546345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:28:17.726531  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:17.763912  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 22:28:17.792014  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1202 22:28:17.792042  546345 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1202 22:28:17.824225  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1202 22:28:17.824250  546345 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1202 22:28:17.838468  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1202 22:28:17.838492  546345 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1202 22:28:17.851940  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1202 22:28:17.851965  546345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1202 22:28:17.864211  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1202 22:28:17.864276  546345 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1202 22:28:17.876057  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1202 22:28:17.876079  546345 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1202 22:28:17.887797  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1202 22:28:17.887867  546345 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1202 22:28:17.899526  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1202 22:28:17.899547  546345 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1202 22:28:17.911602  546345 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:17.911626  546345 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1202 22:28:17.923996  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:18.303299  546345 api_server.go:52] waiting for apiserver process to appear ...
	I1202 22:28:18.303418  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:18.303565  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.303612  546345 retry.go:31] will retry after 133.710161ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:18.303717  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.303748  546345 retry.go:31] will retry after 138.021594ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:18.303974  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.304008  546345 retry.go:31] will retry after 237.208538ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.438371  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:18.442705  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:18.512074  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.512108  546345 retry.go:31] will retry after 489.996663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:18.521184  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.521218  546345 retry.go:31] will retry after 506.041741ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.542348  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:18.605737  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.605775  546345 retry.go:31] will retry after 347.613617ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:18.804191  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:18.953629  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:19.003207  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:19.021755  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.021793  546345 retry.go:31] will retry after 285.211473ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.028084  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:19.152805  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.152839  546345 retry.go:31] will retry after 301.33995ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:19.169007  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.169038  546345 retry.go:31] will retry after 787.522923ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.304323  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:19.307756  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:19.364720  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.364752  546345 retry.go:31] will retry after 744.498002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.454779  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:19.514605  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.514684  546345 retry.go:31] will retry after 936.080491ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:19.803793  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:19.957439  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:17.793953  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:20.293990  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:20.022370  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.022406  546345 retry.go:31] will retry after 798.963887ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.109555  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:20.176777  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.176873  546345 retry.go:31] will retry after 799.677911ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.303906  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:20.451319  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:20.513056  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.513087  546345 retry.go:31] will retry after 774.001274ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.804493  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:20.822263  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:20.884574  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.884663  546345 retry.go:31] will retry after 1.794003449s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:20.976884  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:21.043200  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:21.043233  546345 retry.go:31] will retry after 2.577364105s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:21.287368  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:21.303812  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:21.396263  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:21.396297  546345 retry.go:31] will retry after 1.406655136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:21.803778  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:22.303682  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:22.678940  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:22.734117  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:22.734151  546345 retry.go:31] will retry after 2.241021271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:22.803453  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:22.803660  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:22.908987  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:22.909065  546345 retry.go:31] will retry after 2.592452064s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:23.304587  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:23.621298  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:23.681960  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:23.681992  546345 retry.go:31] will retry after 4.002263162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:23.804126  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:24.303637  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:24.803614  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:24.976147  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:22.793981  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:25.293952  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:25.036436  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:25.036470  546345 retry.go:31] will retry after 3.520246776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:25.303592  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:25.502542  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:25.567000  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:25.567033  546345 retry.go:31] will retry after 5.323254411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:25.804224  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:26.304369  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:26.803599  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:27.303952  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:27.684919  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:27.748186  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:27.748220  546345 retry.go:31] will retry after 5.733866836s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:27.804400  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:28.304209  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:28.556915  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:28.614437  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:28.614469  546345 retry.go:31] will retry after 5.59146354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:28.803555  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:29.303563  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:29.803564  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:27.794055  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:29.794270  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:32.293942  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:30.304278  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:30.803599  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:30.891315  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:30.954133  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:30.954165  546345 retry.go:31] will retry after 6.008326018s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:31.303642  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:31.803766  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:32.304456  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:32.804272  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:33.304447  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:33.482755  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:28:33.544609  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:33.544640  546345 retry.go:31] will retry after 5.236447557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:33.804125  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:34.206989  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:34.267528  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:34.267562  546345 retry.go:31] will retry after 5.128568146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:34.303642  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:34.804011  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:34.793866  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:36.794018  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
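In parallel, the no-preload-904303 process (PID 539599) is polling that node's Ready condition every two seconds and logging each connection-refused attempt; its warnings keep interleaving with the functional test's output below. A sketch of such a poll using client-go (the kubeconfig path and node name are taken from the log; everything else is illustrative, not minikube's node_ready.go):

    // Poll a node's Ready condition, retrying while the apiserver refuses
    // connections, as the node_ready.go warnings above are doing.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-904303", metav1.GetOptions{})
    		if err != nil {
    			// Matches the log: connection refused while the apiserver is down.
    			fmt.Println("error getting node (will retry):", err)
    			time.Sleep(2 * time.Second)
    			continue
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    				fmt.Println("node is Ready")
    				return
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    }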
	I1202 22:28:35.304181  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:35.803881  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:36.304159  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:36.804539  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
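The twice-a-second ssh_runner lines are minikube waiting for a kube-apiserver process to appear inside the node: pgrep's -f matches against the full command line, -x requires the pattern to match it exactly, and -n keeps only the newest match, so a zero exit status means the apiserver process exists. A local, hypothetical equivalent of that poll (os/exec in place of minikube's SSH runner):

    // Poll for a kube-apiserver process belonging to this profile, at the
    // ~500ms cadence visible in the timestamps above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func apiserverRunning() bool {
    	// pgrep exits 0 only when at least one process matched.
    	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    	return err == nil
    }

    func main() {
    	tick := time.NewTicker(500 * time.Millisecond)
    	defer tick.Stop()
    	for range tick.C {
    		if apiserverRunning() {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		fmt.Println("kube-apiserver not running yet")
    	}
    }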
	I1202 22:28:36.963637  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:28:37.037814  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:37.037848  546345 retry.go:31] will retry after 8.195284378s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:37.304208  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:37.804338  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:38.303552  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:38.781347  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:38.803757  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:38.846454  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:38.846487  546345 retry.go:31] will retry after 10.92120738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:39.304100  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:39.396834  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:39.454859  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:39.454893  546345 retry.go:31] will retry after 6.04045657s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:39.804469  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:39.293843  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:41.293938  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:40.303596  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:40.804541  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:41.303922  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:41.803906  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:42.304508  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:42.804313  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:43.304463  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:43.803539  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:44.304169  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:44.803620  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:43.294289  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:45.294597  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:47.294896  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:45.235996  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:45.303907  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:45.410878  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:45.410909  546345 retry.go:31] will retry after 9.368309576s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:45.496112  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:45.553672  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:45.553705  546345 retry.go:31] will retry after 7.750202952s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:45.804015  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:46.303559  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:46.804327  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:47.303603  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:47.804053  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:48.303550  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:48.803634  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:49.303688  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:49.768489  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1202 22:28:49.804064  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:49.895914  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:49.895948  546345 retry.go:31] will retry after 11.070404971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:49.794091  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:51.794902  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:50.304462  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:50.803593  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:51.304256  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:51.804118  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:52.304451  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:52.804096  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:53.303837  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:53.304041  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:28:53.361880  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:53.361915  546345 retry.go:31] will retry after 21.51867829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:53.804496  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:54.303718  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:54.779367  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 22:28:54.803837  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:54.852160  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:28:54.852195  546345 retry.go:31] will retry after 25.514460464s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:28:54.293970  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:28:56.294081  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:28:55.303807  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:55.804288  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:56.304329  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:56.803616  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:57.303836  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:57.804152  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:58.304034  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:58.803992  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:59.304109  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:28:59.804084  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:28:58.793961  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:01.293995  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:00.305594  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:00.803492  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:00.967275  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:29:01.023919  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:01.023952  546345 retry.go:31] will retry after 14.799716379s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:01.304168  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:01.804346  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:02.304261  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:02.803541  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:03.304078  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:03.804260  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:04.304145  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:04.803593  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:29:03.793972  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:05.794096  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:05.304303  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:05.804290  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:06.304157  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:06.804297  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:07.304486  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:07.803594  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:08.303514  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:08.803514  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:09.304264  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:09.804046  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1202 22:29:08.294013  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:10.794007  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:10.304151  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:10.804338  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:11.304108  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:11.803600  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:12.304520  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:12.804189  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:13.304155  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:13.803517  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:14.304548  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:14.803761  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:14.881559  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:29:14.937730  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:14.937760  546345 retry.go:31] will retry after 41.941175985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:29:13.294025  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:15.301168  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:15.316948  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:15.804548  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:15.823888  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:29:15.884943  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:15.884976  546345 retry.go:31] will retry after 35.611848449s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:16.303570  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:16.803687  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:17.304005  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:17.804234  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:17.804335  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:17.829227  546345 cri.go:89] found id: ""
	I1202 22:29:17.829257  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.829265  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:17.829272  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:17.829332  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:17.853121  546345 cri.go:89] found id: ""
	I1202 22:29:17.853146  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.853154  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:17.853161  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:17.853219  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:17.877170  546345 cri.go:89] found id: ""
	I1202 22:29:17.877195  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.877204  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:17.877210  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:17.877267  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:17.904673  546345 cri.go:89] found id: ""
	I1202 22:29:17.904698  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.904707  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:17.904717  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:17.904784  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:17.928244  546345 cri.go:89] found id: ""
	I1202 22:29:17.928284  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.928294  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:17.928301  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:17.928363  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:17.951262  546345 cri.go:89] found id: ""
	I1202 22:29:17.951283  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.951292  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:17.951299  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:17.951363  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:17.979941  546345 cri.go:89] found id: ""
	I1202 22:29:17.979971  546345 logs.go:282] 0 containers: []
	W1202 22:29:17.979980  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:17.979987  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:17.980046  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:18.014330  546345 cri.go:89] found id: ""
	I1202 22:29:18.014352  546345 logs.go:282] 0 containers: []
	W1202 22:29:18.014361  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:18.014370  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:18.014382  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:18.070623  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:18.070659  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:18.086453  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:18.086483  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:18.147206  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:18.139601    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.140184    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.141932    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.142471    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.144157    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:18.139601    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.140184    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.141932    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.142471    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:18.144157    1866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:18.147229  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:18.147242  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:18.171557  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:18.171592  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 22:29:17.794066  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:20.293905  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:22.293952  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:20.367703  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:29:20.422565  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:20.422597  546345 retry.go:31] will retry after 40.968515426s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1202 22:29:20.701050  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:20.711132  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:20.711213  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:20.734019  546345 cri.go:89] found id: ""
	I1202 22:29:20.734042  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.734050  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:20.734057  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:20.734114  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:20.756521  546345 cri.go:89] found id: ""
	I1202 22:29:20.756546  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.756554  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:20.756561  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:20.756620  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:20.787826  546345 cri.go:89] found id: ""
	I1202 22:29:20.787852  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.787869  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:20.787876  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:20.787939  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:20.811402  546345 cri.go:89] found id: ""
	I1202 22:29:20.811427  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.811435  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:20.811441  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:20.811500  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:20.835289  546345 cri.go:89] found id: ""
	I1202 22:29:20.835314  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.835322  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:20.835329  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:20.835404  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:20.858522  546345 cri.go:89] found id: ""
	I1202 22:29:20.858548  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.858556  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:20.858563  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:20.858622  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:20.883759  546345 cri.go:89] found id: ""
	I1202 22:29:20.883783  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.883791  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:20.883798  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:20.883857  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:20.907968  546345 cri.go:89] found id: ""
	I1202 22:29:20.907992  546345 logs.go:282] 0 containers: []
	W1202 22:29:20.908001  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:20.908010  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:20.908020  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:20.962992  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:20.963028  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:20.978472  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:20.978499  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:21.039749  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:21.032843    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.033345    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.034809    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.035236    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.036659    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:21.032843    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.033345    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.034809    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.035236    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:21.036659    1984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:21.039771  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:21.039784  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:21.064157  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:21.064194  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:23.595745  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:23.606920  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:23.606996  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:23.633420  546345 cri.go:89] found id: ""
	I1202 22:29:23.633450  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.633459  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:23.633473  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:23.633532  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:23.659559  546345 cri.go:89] found id: ""
	I1202 22:29:23.659581  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.659590  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:23.659596  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:23.659663  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:23.684986  546345 cri.go:89] found id: ""
	I1202 22:29:23.685010  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.685031  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:23.685039  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:23.685099  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:23.709487  546345 cri.go:89] found id: ""
	I1202 22:29:23.709560  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.709583  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:23.709604  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:23.709734  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:23.734133  546345 cri.go:89] found id: ""
	I1202 22:29:23.734159  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.734167  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:23.734173  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:23.734233  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:23.758126  546345 cri.go:89] found id: ""
	I1202 22:29:23.758190  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.758213  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:23.758234  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:23.758327  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:23.782448  546345 cri.go:89] found id: ""
	I1202 22:29:23.782471  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.782480  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:23.782505  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:23.782579  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:23.806736  546345 cri.go:89] found id: ""
	I1202 22:29:23.806761  546345 logs.go:282] 0 containers: []
	W1202 22:29:23.806770  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:23.806780  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:23.806790  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:23.865578  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:23.865619  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:23.881434  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:23.881470  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:23.944584  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:23.936843    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.937517    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.939081    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.939622    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.941360    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:23.936843    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.937517    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.939081    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.939622    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:23.941360    2098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:23.944606  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:23.944619  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:23.970159  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:23.970207  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1202 22:29:24.793885  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	W1202 22:29:26.794021  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:26.498138  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:26.508783  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:26.508852  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:26.537015  546345 cri.go:89] found id: ""
	I1202 22:29:26.537037  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.537046  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:26.537053  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:26.537110  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:26.574312  546345 cri.go:89] found id: ""
	I1202 22:29:26.574339  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.574347  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:26.574354  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:26.574411  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:26.629052  546345 cri.go:89] found id: ""
	I1202 22:29:26.629079  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.629087  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:26.629094  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:26.629150  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:26.658217  546345 cri.go:89] found id: ""
	I1202 22:29:26.658251  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.658259  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:26.658266  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:26.658337  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:26.681717  546345 cri.go:89] found id: ""
	I1202 22:29:26.681751  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.681760  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:26.681778  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:26.681850  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:26.704611  546345 cri.go:89] found id: ""
	I1202 22:29:26.704646  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.704655  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:26.704661  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:26.704733  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:26.728028  546345 cri.go:89] found id: ""
	I1202 22:29:26.728091  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.728115  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:26.728137  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:26.728223  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:26.755557  546345 cri.go:89] found id: ""
	I1202 22:29:26.755582  546345 logs.go:282] 0 containers: []
	W1202 22:29:26.755590  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:26.755600  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:26.755611  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:26.786053  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:26.786080  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:26.841068  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:26.841100  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:26.856799  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:26.856829  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:26.924274  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:26.913901    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.914406    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.918374    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.919140    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.920188    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:26.913901    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.914406    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.918374    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.919140    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:26.920188    2224 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:26.924338  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:26.924358  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:29.449918  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:29.460186  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:29.460259  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:29.483893  546345 cri.go:89] found id: ""
	I1202 22:29:29.483915  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.483924  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:29.483930  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:29.483990  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:29.507973  546345 cri.go:89] found id: ""
	I1202 22:29:29.507999  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.508007  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:29.508013  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:29.508073  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:29.532020  546345 cri.go:89] found id: ""
	I1202 22:29:29.532045  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.532054  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:29.532061  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:29.532119  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:29.583563  546345 cri.go:89] found id: ""
	I1202 22:29:29.583590  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.583599  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:29.583606  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:29.583664  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:29.626796  546345 cri.go:89] found id: ""
	I1202 22:29:29.626821  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.626830  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:29.626837  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:29.626910  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:29.650151  546345 cri.go:89] found id: ""
	I1202 22:29:29.650179  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.650186  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:29.650193  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:29.650254  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:29.677989  546345 cri.go:89] found id: ""
	I1202 22:29:29.678015  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.678023  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:29.678031  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:29.678090  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:29.707431  546345 cri.go:89] found id: ""
	I1202 22:29:29.707457  546345 logs.go:282] 0 containers: []
	W1202 22:29:29.707465  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:29.707475  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:29.707486  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:29.773447  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:29.766251    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.766804    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.768321    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.768748    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.770331    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:29.766251    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.766804    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.768321    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.768748    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:29.770331    2324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:29.773470  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:29.773484  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:29.798530  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:29.798604  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:29.825490  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:29.825517  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:29.884423  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:29.884461  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1202 22:29:28.794762  539599 node_ready.go:55] error getting node "no-preload-904303" condition "Ready" status (will retry): Get "https://192.168.76.2:8443/api/v1/nodes/no-preload-904303": dial tcp 192.168.76.2:8443: connect: connection refused
	I1202 22:29:30.793709  539599 node_ready.go:38] duration metric: took 6m0.000289785s for node "no-preload-904303" to be "Ready" ...
	I1202 22:29:30.796935  539599 out.go:203] 
	W1202 22:29:30.799794  539599 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1202 22:29:30.799816  539599 out.go:285] * 
	W1202 22:29:30.802151  539599 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1202 22:29:30.804961  539599 out.go:203] 
	I1202 22:29:32.401788  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:32.413697  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:32.413768  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:32.447463  546345 cri.go:89] found id: ""
	I1202 22:29:32.447486  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.447494  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:32.447501  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:32.447560  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:32.480451  546345 cri.go:89] found id: ""
	I1202 22:29:32.480473  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.480481  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:32.480487  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:32.480543  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:32.518559  546345 cri.go:89] found id: ""
	I1202 22:29:32.518581  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.518590  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:32.518596  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:32.518652  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:32.570716  546345 cri.go:89] found id: ""
	I1202 22:29:32.570737  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.570746  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:32.570752  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:32.570809  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:32.612686  546345 cri.go:89] found id: ""
	I1202 22:29:32.612722  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.612731  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:32.612738  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:32.612797  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:32.651570  546345 cri.go:89] found id: ""
	I1202 22:29:32.651592  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.651600  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:32.651607  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:32.651671  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:32.679451  546345 cri.go:89] found id: ""
	I1202 22:29:32.679475  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.679484  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:32.679490  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:32.679552  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:32.705124  546345 cri.go:89] found id: ""
	I1202 22:29:32.705149  546345 logs.go:282] 0 containers: []
	W1202 22:29:32.705170  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:32.705180  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:32.705193  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:32.772557  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:32.763930    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.764653    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.766262    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.766778    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.768469    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:32.763930    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.764653    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.766262    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.766778    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:32.768469    2432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:32.772578  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:32.772590  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:32.798210  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:32.798246  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:32.826270  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:32.826298  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:32.885460  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:32.885496  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:35.401743  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:35.412979  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:35.413051  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:35.438650  546345 cri.go:89] found id: ""
	I1202 22:29:35.438684  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.438703  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:35.438710  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:35.438787  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:35.467326  546345 cri.go:89] found id: ""
	I1202 22:29:35.467350  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.467358  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:35.467365  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:35.467444  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:35.492513  546345 cri.go:89] found id: ""
	I1202 22:29:35.492546  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.492554  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:35.492561  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:35.492659  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:35.517758  546345 cri.go:89] found id: ""
	I1202 22:29:35.517785  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.517794  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:35.517801  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:35.517861  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:35.564303  546345 cri.go:89] found id: ""
	I1202 22:29:35.564329  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.564338  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:35.564345  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:35.564431  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:35.610173  546345 cri.go:89] found id: ""
	I1202 22:29:35.610253  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.610289  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:35.610311  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:35.610412  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:35.647481  546345 cri.go:89] found id: ""
	I1202 22:29:35.647545  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.647560  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:35.647567  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:35.647628  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:35.671535  546345 cri.go:89] found id: ""
	I1202 22:29:35.671561  546345 logs.go:282] 0 containers: []
	W1202 22:29:35.671569  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:35.671579  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:35.671591  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:35.736069  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:35.728833    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.729443    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.730886    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.731384    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.732973    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:29:35.728833    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.729443    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.730886    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.731384    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:35.732973    2547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:29:35.736092  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:35.736106  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:35.760759  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:35.760794  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:35.786652  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:35.786678  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:35.842999  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:35.843035  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:38.358963  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:38.369060  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:38.369123  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:38.400304  546345 cri.go:89] found id: ""
	I1202 22:29:38.400330  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.400339  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:38.400351  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:38.400407  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:38.424847  546345 cri.go:89] found id: ""
	I1202 22:29:38.424873  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.424881  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:38.424888  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:38.424946  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:38.452445  546345 cri.go:89] found id: ""
	I1202 22:29:38.452472  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.452481  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:38.452487  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:38.452544  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:38.480761  546345 cri.go:89] found id: ""
	I1202 22:29:38.480783  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.480804  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:38.480811  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:38.480870  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:38.505019  546345 cri.go:89] found id: ""
	I1202 22:29:38.505044  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.505052  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:38.505059  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:38.505116  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:38.528009  546345 cri.go:89] found id: ""
	I1202 22:29:38.528036  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.528045  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:38.528052  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:38.528109  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:38.595576  546345 cri.go:89] found id: ""
	I1202 22:29:38.595598  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.595606  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:38.595613  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:38.595671  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:38.638153  546345 cri.go:89] found id: ""
	I1202 22:29:38.638177  546345 logs.go:282] 0 containers: []
	W1202 22:29:38.638186  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:38.638195  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:38.638206  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:38.653639  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:38.653696  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:38.715223  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:38.707314    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:38.708623    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:38.709486    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:38.710286    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:38.711017    2663 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:38.715245  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:38.715258  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:38.739162  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:38.739196  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:38.766317  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:38.766345  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:41.321520  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:41.331550  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:41.331636  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:41.355934  546345 cri.go:89] found id: ""
	I1202 22:29:41.355959  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.355968  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:41.355975  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:41.356035  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:41.381232  546345 cri.go:89] found id: ""
	I1202 22:29:41.381254  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.381263  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:41.381269  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:41.381325  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:41.406147  546345 cri.go:89] found id: ""
	I1202 22:29:41.406171  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.406179  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:41.406186  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:41.406246  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:41.435516  546345 cri.go:89] found id: ""
	I1202 22:29:41.435542  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.435551  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:41.435559  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:41.435619  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:41.460909  546345 cri.go:89] found id: ""
	I1202 22:29:41.460932  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.460941  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:41.460948  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:41.461035  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:41.487520  546345 cri.go:89] found id: ""
	I1202 22:29:41.487553  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.487570  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:41.487577  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:41.487648  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:41.512354  546345 cri.go:89] found id: ""
	I1202 22:29:41.512425  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.512449  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:41.512469  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:41.512552  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:41.536885  546345 cri.go:89] found id: ""
	I1202 22:29:41.536908  546345 logs.go:282] 0 containers: []
	W1202 22:29:41.536917  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:41.536927  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:41.536938  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:41.607465  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:41.607514  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:41.635996  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:41.636025  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:41.712077  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:41.704951    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:41.705647    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:41.707121    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:41.707514    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:41.708659    2777 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:41.712100  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:41.712113  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:41.736613  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:41.736660  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:44.265095  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:44.276615  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:44.276703  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:44.303304  546345 cri.go:89] found id: ""
	I1202 22:29:44.303325  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.303334  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:44.303340  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:44.303403  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:44.333144  546345 cri.go:89] found id: ""
	I1202 22:29:44.333167  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.333176  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:44.333182  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:44.333258  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:44.359646  546345 cri.go:89] found id: ""
	I1202 22:29:44.359675  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.359684  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:44.359691  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:44.359751  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:44.384230  546345 cri.go:89] found id: ""
	I1202 22:29:44.384255  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.384264  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:44.384270  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:44.384342  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:44.409648  546345 cri.go:89] found id: ""
	I1202 22:29:44.409701  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.409711  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:44.409718  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:44.409776  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:44.434410  546345 cri.go:89] found id: ""
	I1202 22:29:44.434437  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.434446  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:44.434452  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:44.434512  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:44.458352  546345 cri.go:89] found id: ""
	I1202 22:29:44.458376  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.458385  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:44.458392  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:44.458465  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:44.486353  546345 cri.go:89] found id: ""
	I1202 22:29:44.486385  546345 logs.go:282] 0 containers: []
	W1202 22:29:44.486396  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:44.486420  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:44.486436  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:44.510698  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:44.510737  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:44.552264  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:44.552293  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:44.660418  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:44.660451  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:44.676162  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:44.676230  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:44.741313  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:44.734563    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:44.735043    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:44.736515    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:44.736835    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:44.738249    2902 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:47.241695  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:47.253909  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:47.253977  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:47.280127  546345 cri.go:89] found id: ""
	I1202 22:29:47.280151  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.280159  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:47.280166  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:47.280227  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:47.309688  546345 cri.go:89] found id: ""
	I1202 22:29:47.309711  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.309719  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:47.309726  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:47.309795  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:47.334233  546345 cri.go:89] found id: ""
	I1202 22:29:47.334259  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.334268  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:47.334275  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:47.334330  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:47.363203  546345 cri.go:89] found id: ""
	I1202 22:29:47.363228  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.363237  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:47.363245  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:47.363314  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:47.390073  546345 cri.go:89] found id: ""
	I1202 22:29:47.390096  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.390104  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:47.390111  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:47.390168  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:47.416413  546345 cri.go:89] found id: ""
	I1202 22:29:47.416435  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.416444  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:47.416451  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:47.416518  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:47.440718  546345 cri.go:89] found id: ""
	I1202 22:29:47.440743  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.440753  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:47.440759  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:47.440818  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:47.463875  546345 cri.go:89] found id: ""
	I1202 22:29:47.463901  546345 logs.go:282] 0 containers: []
	W1202 22:29:47.463910  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:47.463920  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:47.463931  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:47.492814  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:47.492842  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:47.558225  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:47.558264  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:47.574145  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:47.574174  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:47.666298  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:47.658677    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:47.659357    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:47.660936    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:47.661477    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:47.663047    3015 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:47.666357  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:47.666385  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:50.191511  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:50.202178  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:50.202258  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:50.229173  546345 cri.go:89] found id: ""
	I1202 22:29:50.229213  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.229222  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:50.229228  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:50.229293  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:50.253932  546345 cri.go:89] found id: ""
	I1202 22:29:50.253962  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.253971  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:50.253977  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:50.254033  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:50.278257  546345 cri.go:89] found id: ""
	I1202 22:29:50.278280  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.278289  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:50.278296  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:50.278351  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:50.306884  546345 cri.go:89] found id: ""
	I1202 22:29:50.306907  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.306914  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:50.306921  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:50.306989  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:50.331454  546345 cri.go:89] found id: ""
	I1202 22:29:50.331528  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.331553  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:50.331566  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:50.331658  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:50.355157  546345 cri.go:89] found id: ""
	I1202 22:29:50.355230  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.355254  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:50.355268  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:50.355346  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:50.380390  546345 cri.go:89] found id: ""
	I1202 22:29:50.380415  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.380424  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:50.380430  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:50.380518  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:50.408708  546345 cri.go:89] found id: ""
	I1202 22:29:50.408733  546345 logs.go:282] 0 containers: []
	W1202 22:29:50.408742  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:50.408751  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:50.408800  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:50.466607  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:50.466641  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:50.482087  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:50.482154  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:50.548310  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:50.537223    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:50.537900    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:50.541639    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:50.542300    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:50.543919    3108 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:50.548334  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:50.548347  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:50.581455  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:50.581492  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:51.497099  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1202 22:29:51.556470  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:29:51.556588  546345 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 22:29:53.133025  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:53.143115  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:53.143180  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:53.166147  546345 cri.go:89] found id: ""
	I1202 22:29:53.166169  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.166177  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:53.166183  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:53.166251  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:53.191215  546345 cri.go:89] found id: ""
	I1202 22:29:53.191238  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.191247  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:53.191253  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:53.191329  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:53.214527  546345 cri.go:89] found id: ""
	I1202 22:29:53.214593  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.214616  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:53.214631  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:53.214701  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:53.239062  546345 cri.go:89] found id: ""
	I1202 22:29:53.239089  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.239098  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:53.239105  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:53.239270  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:53.269346  546345 cri.go:89] found id: ""
	I1202 22:29:53.269416  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.269440  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:53.269462  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:53.269571  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:53.293728  546345 cri.go:89] found id: ""
	I1202 22:29:53.293802  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.293825  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:53.293845  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:53.293942  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:53.322079  546345 cri.go:89] found id: ""
	I1202 22:29:53.322106  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.322115  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:53.322121  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:53.322180  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:53.345988  546345 cri.go:89] found id: ""
	I1202 22:29:53.346055  546345 logs.go:282] 0 containers: []
	W1202 22:29:53.346079  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:53.346103  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:53.346128  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:53.402872  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:53.402909  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:53.418121  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:53.418150  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:53.480652  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:53.472986    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:53.473648    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:53.475387    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:53.475778    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:53.477212    3229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:53.480725  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:53.480756  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:53.505378  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:53.505414  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:56.037255  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:56.048340  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:56.048412  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:56.080851  546345 cri.go:89] found id: ""
	I1202 22:29:56.080878  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.080888  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:56.080894  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:56.080963  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:56.105446  546345 cri.go:89] found id: ""
	I1202 22:29:56.105472  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.105481  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:56.105488  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:56.105545  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:56.131318  546345 cri.go:89] found id: ""
	I1202 22:29:56.131344  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.131352  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:56.131358  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:56.131414  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:56.159096  546345 cri.go:89] found id: ""
	I1202 22:29:56.159118  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.159126  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:56.159132  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:56.159191  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:56.183173  546345 cri.go:89] found id: ""
	I1202 22:29:56.183199  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.183207  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:56.183214  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:56.183279  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:56.207984  546345 cri.go:89] found id: ""
	I1202 22:29:56.208017  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.208029  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:56.208035  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:56.208095  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:56.232594  546345 cri.go:89] found id: ""
	I1202 22:29:56.232617  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.232625  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:56.232632  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:56.232699  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:56.257221  546345 cri.go:89] found id: ""
	I1202 22:29:56.257247  546345 logs.go:282] 0 containers: []
	W1202 22:29:56.257256  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:56.257265  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:56.257278  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:29:56.283035  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:56.283061  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:56.339962  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:56.339997  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:56.355699  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:56.355773  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:56.414625  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:56.408245    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:56.408723    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:56.409828    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:56.410193    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:56.411567    3354 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
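Every kubectl call in these gathers fails the same way: client-side API discovery (memcache.go) dials localhost:8443 and is refused, because no kube-apiserver container exists to listen there. A hedged sketch of confirming that by hand on the node, where <profile> is a placeholder for the profile under test and ss is assumed to be present in the node image:

	minikube ssh -p <profile> -- sudo crictl ps -a --name kube-apiserver
	minikube ssh -p <profile> -- "sudo ss -tlnp | grep 8443 || echo nothing listening on 8443"

An empty container list plus no listener on 8443 matches every connection-refused line in this log.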
	I1202 22:29:56.414693  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:56.414738  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:56.879279  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1202 22:29:56.938440  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:29:56.938561  546345 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
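The storageclass manifest itself is not at fault: kubectl fails while downloading the OpenAPI schema it validates against, and that download needs the very apiserver that is down. The --validate=false escape hatch kubectl suggests would only skip the schema fetch; the write that follows would still be refused, which is why the addon logic retries instead. As a sketch, the same command with validation disabled (paths and version copied from the log) would keep failing until an apiserver is listening:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/storageclass.yaml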
	I1202 22:29:58.938802  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:29:58.951366  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:29:58.951487  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:29:58.978893  546345 cri.go:89] found id: ""
	I1202 22:29:58.978916  546345 logs.go:282] 0 containers: []
	W1202 22:29:58.978924  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:29:58.978931  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:29:58.978990  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:29:59.005270  546345 cri.go:89] found id: ""
	I1202 22:29:59.005299  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.005309  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:29:59.005316  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:29:59.005396  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:29:59.029424  546345 cri.go:89] found id: ""
	I1202 22:29:59.029453  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.029461  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:29:59.029468  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:29:59.029525  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:29:59.053363  546345 cri.go:89] found id: ""
	I1202 22:29:59.053398  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.053407  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:29:59.053414  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:29:59.053481  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:29:59.078974  546345 cri.go:89] found id: ""
	I1202 22:29:59.079051  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.079073  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:29:59.079088  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:29:59.079162  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:29:59.103336  546345 cri.go:89] found id: ""
	I1202 22:29:59.103358  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.103366  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:29:59.103383  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:29:59.103441  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:29:59.127855  546345 cri.go:89] found id: ""
	I1202 22:29:59.127929  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.127952  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:29:59.127972  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:29:59.128077  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:29:59.151167  546345 cri.go:89] found id: ""
	I1202 22:29:59.151196  546345 logs.go:282] 0 containers: []
	W1202 22:29:59.151204  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:29:59.151213  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:29:59.151224  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:29:59.208516  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:29:59.208559  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:29:59.224755  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:29:59.224780  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:29:59.286748  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:29:59.279244    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.279739    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.281332    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.281754    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:29:59.283394    3467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:29:59.286772  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:29:59.286787  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:29:59.311855  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:29:59.311889  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:01.391459  546345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1202 22:30:01.475431  546345 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1202 22:30:01.475652  546345 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1202 22:30:01.478828  546345 out.go:179] * Enabled addons: 
	I1202 22:30:01.482057  546345 addons.go:530] duration metric: took 1m44.065625472s for enable addons: enabled=[]
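The empty enabled=[] list confirms neither callback ever succeeded; the 1m44s duration metric is the retry budget elapsing, not work completed. A hypothetical follow-up check, assuming the profile name is known:

	minikube addons list -p <profile>

which would be expected to show default-storageclass and storage-provisioner as enabled in the profile's configuration even though nothing was actually applied in-cluster.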
	I1202 22:30:01.843006  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:01.854584  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:01.854684  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:01.885465  546345 cri.go:89] found id: ""
	I1202 22:30:01.885501  546345 logs.go:282] 0 containers: []
	W1202 22:30:01.885510  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:01.885517  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:01.885587  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:01.917316  546345 cri.go:89] found id: ""
	I1202 22:30:01.917348  546345 logs.go:282] 0 containers: []
	W1202 22:30:01.917359  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:01.917366  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:01.917463  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:01.943052  546345 cri.go:89] found id: ""
	I1202 22:30:01.943078  546345 logs.go:282] 0 containers: []
	W1202 22:30:01.943086  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:01.943093  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:01.943153  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:01.969294  546345 cri.go:89] found id: ""
	I1202 22:30:01.969321  546345 logs.go:282] 0 containers: []
	W1202 22:30:01.969330  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:01.969339  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:01.969402  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:01.996336  546345 cri.go:89] found id: ""
	I1202 22:30:01.996405  546345 logs.go:282] 0 containers: []
	W1202 22:30:01.996428  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:01.996449  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:01.996537  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:02.025075  546345 cri.go:89] found id: ""
	I1202 22:30:02.025158  546345 logs.go:282] 0 containers: []
	W1202 22:30:02.025183  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:02.025203  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:02.025300  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:02.078384  546345 cri.go:89] found id: ""
	I1202 22:30:02.078450  546345 logs.go:282] 0 containers: []
	W1202 22:30:02.078474  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:02.078493  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:02.078585  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:02.124922  546345 cri.go:89] found id: ""
	I1202 22:30:02.125001  546345 logs.go:282] 0 containers: []
	W1202 22:30:02.125021  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:02.125031  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:02.125044  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:02.197595  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:02.188806    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.189743    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.191423    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.192018    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:02.193637    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:02.197618  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:02.197634  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:02.223170  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:02.223203  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:02.255281  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:02.255348  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:02.310654  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:02.310690  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:04.828623  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:04.839157  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:04.839282  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:04.863863  546345 cri.go:89] found id: ""
	I1202 22:30:04.863887  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.863896  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:04.863903  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:04.863996  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:04.890006  546345 cri.go:89] found id: ""
	I1202 22:30:04.890031  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.890040  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:04.890047  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:04.890146  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:04.915998  546345 cri.go:89] found id: ""
	I1202 22:30:04.916021  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.916035  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:04.916042  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:04.916100  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:04.940395  546345 cri.go:89] found id: ""
	I1202 22:30:04.940420  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.940429  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:04.940435  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:04.940495  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:04.964621  546345 cri.go:89] found id: ""
	I1202 22:30:04.964650  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.964660  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:04.964667  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:04.964737  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:04.989632  546345 cri.go:89] found id: ""
	I1202 22:30:04.989685  546345 logs.go:282] 0 containers: []
	W1202 22:30:04.989694  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:04.989702  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:04.989760  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:05.019501  546345 cri.go:89] found id: ""
	I1202 22:30:05.019528  546345 logs.go:282] 0 containers: []
	W1202 22:30:05.019537  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:05.019545  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:05.019610  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:05.049637  546345 cri.go:89] found id: ""
	I1202 22:30:05.049682  546345 logs.go:282] 0 containers: []
	W1202 22:30:05.049690  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:05.049700  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:05.049711  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:05.088244  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:05.088281  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:05.133381  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:05.133409  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:05.194841  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:05.194874  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:05.210533  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:05.210560  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:05.273348  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:05.265533    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.265959    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.267751    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.268062    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:05.270006    3707 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
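From here the log is one probe loop on a short backoff: pgrep for an apiserver process scoped to this profile, one crictl query per expected component, then the four log gathers (kubelet, dmesg, describe nodes, containerd) in varying order. A rough sketch of the wait this corresponds to, an approximation rather than minikube's actual code, with the pattern copied from the pgrep lines above:

	# poll until a kube-apiserver process for this profile appears
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 3
	done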
	I1202 22:30:07.774501  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:07.784828  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:07.784927  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:07.814568  546345 cri.go:89] found id: ""
	I1202 22:30:07.814610  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.814619  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:07.814627  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:07.814711  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:07.839281  546345 cri.go:89] found id: ""
	I1202 22:30:07.839306  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.839325  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:07.839333  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:07.839410  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:07.863734  546345 cri.go:89] found id: ""
	I1202 22:30:07.863756  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.863764  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:07.863771  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:07.863830  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:07.887517  546345 cri.go:89] found id: ""
	I1202 22:30:07.887541  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.887549  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:07.887556  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:07.887615  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:07.912577  546345 cri.go:89] found id: ""
	I1202 22:30:07.912599  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.912608  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:07.912614  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:07.912684  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:07.937037  546345 cri.go:89] found id: ""
	I1202 22:30:07.937062  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.937071  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:07.937088  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:07.937153  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:07.961873  546345 cri.go:89] found id: ""
	I1202 22:30:07.961901  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.961910  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:07.961916  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:07.961974  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:07.985864  546345 cri.go:89] found id: ""
	I1202 22:30:07.985890  546345 logs.go:282] 0 containers: []
	W1202 22:30:07.985906  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:07.985917  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:07.985928  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:08.011244  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:08.011284  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:08.055290  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:08.055321  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:08.134015  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:08.134069  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:08.154013  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:08.154041  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:08.223778  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:08.216502    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.217150    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.218711    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.219222    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:08.220667    3819 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:10.723964  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:10.736098  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:10.736214  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:10.761205  546345 cri.go:89] found id: ""
	I1202 22:30:10.761227  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.761236  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:10.761243  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:10.761303  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:10.785829  546345 cri.go:89] found id: ""
	I1202 22:30:10.785856  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.785865  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:10.785872  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:10.785931  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:10.815724  546345 cri.go:89] found id: ""
	I1202 22:30:10.815748  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.815757  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:10.815767  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:10.815844  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:10.840563  546345 cri.go:89] found id: ""
	I1202 22:30:10.840586  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.840594  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:10.840601  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:10.840667  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:10.869275  546345 cri.go:89] found id: ""
	I1202 22:30:10.869349  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.869372  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:10.869391  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:10.869478  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:10.894450  546345 cri.go:89] found id: ""
	I1202 22:30:10.894477  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.894486  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:10.894493  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:10.894572  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:10.919134  546345 cri.go:89] found id: ""
	I1202 22:30:10.919161  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.919170  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:10.919177  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:10.919238  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:10.944009  546345 cri.go:89] found id: ""
	I1202 22:30:10.944035  546345 logs.go:282] 0 containers: []
	W1202 22:30:10.944044  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:10.944053  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:10.944066  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:11.000144  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:11.000183  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:11.018501  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:11.018532  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:11.149770  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:11.141251    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.142054    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.143941    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.144500    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:11.146190    3918 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:11.149837  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:11.149860  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:11.175018  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:11.175055  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:13.702967  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:13.713482  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:13.713560  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:13.739844  546345 cri.go:89] found id: ""
	I1202 22:30:13.739867  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.739876  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:13.739886  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:13.739943  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:13.765162  546345 cri.go:89] found id: ""
	I1202 22:30:13.765184  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.765192  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:13.765199  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:13.765256  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:13.790968  546345 cri.go:89] found id: ""
	I1202 22:30:13.790991  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.790999  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:13.791005  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:13.791069  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:13.816755  546345 cri.go:89] found id: ""
	I1202 22:30:13.816791  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.816799  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:13.816806  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:13.816869  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:13.843444  546345 cri.go:89] found id: ""
	I1202 22:30:13.843469  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.843477  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:13.843484  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:13.843551  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:13.868489  546345 cri.go:89] found id: ""
	I1202 22:30:13.868514  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.868523  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:13.868530  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:13.868608  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:13.893527  546345 cri.go:89] found id: ""
	I1202 22:30:13.893552  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.893560  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:13.893567  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:13.893624  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:13.919358  546345 cri.go:89] found id: ""
	I1202 22:30:13.919382  546345 logs.go:282] 0 containers: []
	W1202 22:30:13.919390  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:13.919400  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:13.919411  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:13.946818  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:13.946846  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:14.004198  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:14.004294  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:14.021120  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:14.021157  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:14.145347  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:14.136103    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.138065    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.138857    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.140566    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:14.141159    4043 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:14.145369  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:14.145382  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
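The describe-nodes gather can never surface the root cause while the apiserver is down; the place it would show up is the kubelet unit log, since kubelet launches the control-plane static pods, with containerd next in line. A sketch for pulling both off the node interactively, using the same units and line counts minikube uses above plus --no-pager, where <profile> is again a placeholder:

	minikube ssh -p <profile> -- sudo journalctl -u kubelet -n 400 --no-pager
	minikube ssh -p <profile> -- sudo journalctl -u containerd -n 400 --no-pager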
	I1202 22:30:16.669687  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:16.680323  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:16.680426  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:16.705295  546345 cri.go:89] found id: ""
	I1202 22:30:16.705320  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.705329  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:16.705335  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:16.705394  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:16.729538  546345 cri.go:89] found id: ""
	I1202 22:30:16.729633  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.729648  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:16.729682  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:16.729766  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:16.754022  546345 cri.go:89] found id: ""
	I1202 22:30:16.754045  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.754053  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:16.754059  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:16.754119  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:16.780138  546345 cri.go:89] found id: ""
	I1202 22:30:16.780163  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.780171  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:16.780178  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:16.780237  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:16.805096  546345 cri.go:89] found id: ""
	I1202 22:30:16.805123  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.805134  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:16.805141  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:16.805201  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:16.830436  546345 cri.go:89] found id: ""
	I1202 22:30:16.830461  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.830470  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:16.830477  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:16.830537  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:16.859101  546345 cri.go:89] found id: ""
	I1202 22:30:16.859126  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.859135  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:16.859142  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:16.859201  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:16.884001  546345 cri.go:89] found id: ""
	I1202 22:30:16.884025  546345 logs.go:282] 0 containers: []
	W1202 22:30:16.884033  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:16.884043  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:16.884054  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:16.919216  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:16.919242  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:16.974540  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:16.974574  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:16.990333  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:16.990361  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:17.096330  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:17.076545    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.087821    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.088549    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.090292    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:17.090828    4155 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:17.096351  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:17.096363  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
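Each cycle above probes every expected control-plane container by name before falling back to log collection. A minimal bash sketch of that probe, assuming crictl is installed and pointed at containerd's CRI socket; the component list and the crictl invocation are copied from the log:

    # Probe each control-plane component the way the log does.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "No container was found matching \"$name\""
      fi
    done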
	I1202 22:30:19.641119  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:19.651302  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:19.651372  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:19.675892  546345 cri.go:89] found id: ""
	I1202 22:30:19.675920  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.675929  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:19.675935  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:19.675993  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:19.700442  546345 cri.go:89] found id: ""
	I1202 22:30:19.700472  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.700480  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:19.700487  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:19.700545  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:19.724905  546345 cri.go:89] found id: ""
	I1202 22:30:19.724933  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.724941  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:19.724948  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:19.725008  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:19.749042  546345 cri.go:89] found id: ""
	I1202 22:30:19.749064  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.749072  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:19.749079  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:19.749142  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:19.772319  546345 cri.go:89] found id: ""
	I1202 22:30:19.772346  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.772354  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:19.772361  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:19.772423  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:19.796590  546345 cri.go:89] found id: ""
	I1202 22:30:19.796661  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.796685  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:19.796706  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:19.796791  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:19.820897  546345 cri.go:89] found id: ""
	I1202 22:30:19.820971  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.820994  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:19.821013  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:19.821097  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:19.845057  546345 cri.go:89] found id: ""
	I1202 22:30:19.845127  546345 logs.go:282] 0 containers: []
	W1202 22:30:19.845151  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:19.845173  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:19.845210  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:19.901157  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:19.901190  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:19.916681  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:19.916709  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:19.978835  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:19.970731    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.971143    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.973694    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.974147    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:19.975596    4257 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:19.978855  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:19.978868  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:20.003532  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:20.003576  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
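The timestamps (22:30:19, 22:30:22, 22:30:25, ...) show the apiserver check repeating roughly every three seconds. A sketch of that wait loop, reusing the pgrep pattern from the log; the sleep interval is inferred from the timestamps, and the loop's overall timeout is not visible in this excerpt:

    # Wait until a kube-apiserver process for this profile appears.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3   # interval inferred from the log timestamps
    done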
	I1202 22:30:22.540194  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:22.550669  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:22.550752  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:22.575137  546345 cri.go:89] found id: ""
	I1202 22:30:22.575162  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.575179  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:22.575186  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:22.575246  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:22.600172  546345 cri.go:89] found id: ""
	I1202 22:30:22.600199  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.600208  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:22.600214  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:22.600280  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:22.627626  546345 cri.go:89] found id: ""
	I1202 22:30:22.627652  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.627661  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:22.627667  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:22.627727  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:22.652380  546345 cri.go:89] found id: ""
	I1202 22:30:22.652407  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.652416  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:22.652422  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:22.652483  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:22.679899  546345 cri.go:89] found id: ""
	I1202 22:30:22.679924  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.679933  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:22.679939  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:22.679999  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:22.704508  546345 cri.go:89] found id: ""
	I1202 22:30:22.704533  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.704542  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:22.704548  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:22.704623  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:22.729344  546345 cri.go:89] found id: ""
	I1202 22:30:22.729372  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.729380  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:22.729387  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:22.729451  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:22.753872  546345 cri.go:89] found id: ""
	I1202 22:30:22.753899  546345 logs.go:282] 0 containers: []
	W1202 22:30:22.753908  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:22.753918  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:22.753929  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:22.810619  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:22.810654  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:22.826861  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:22.826887  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:22.891768  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:22.882105    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.883827    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.884903    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.886521    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:22.887080    4372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:22.891788  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:22.891801  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:22.915527  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:22.915563  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
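Every kubectl call in the stderr above fails to dial [::1]:8443, which is consistent with no kube-apiserver container existing at all. A quick hand check of the same symptom, assuming curl is available on the node (a hypothetical probe, not part of the test run):

    # Expect "connection refused" while the apiserver container is absent.
    curl -sk --max-time 5 https://localhost:8443/healthz \
        || echo "apiserver not reachable on :8443"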
	I1202 22:30:25.443424  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:25.454070  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:25.454140  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:25.477867  546345 cri.go:89] found id: ""
	I1202 22:30:25.477888  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.477896  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:25.477902  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:25.477961  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:25.503405  546345 cri.go:89] found id: ""
	I1202 22:30:25.503440  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.503449  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:25.503456  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:25.503548  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:25.528678  546345 cri.go:89] found id: ""
	I1202 22:30:25.528703  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.528711  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:25.528718  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:25.528784  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:25.555479  546345 cri.go:89] found id: ""
	I1202 22:30:25.555505  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.555513  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:25.555520  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:25.555587  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:25.588375  546345 cri.go:89] found id: ""
	I1202 22:30:25.588398  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.588408  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:25.588415  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:25.588475  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:25.613403  546345 cri.go:89] found id: ""
	I1202 22:30:25.613488  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.613511  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:25.613532  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:25.613627  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:25.644249  546345 cri.go:89] found id: ""
	I1202 22:30:25.644273  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.644282  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:25.644289  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:25.644348  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:25.669360  546345 cri.go:89] found id: ""
	I1202 22:30:25.669385  546345 logs.go:282] 0 containers: []
	W1202 22:30:25.669394  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:25.669432  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:25.669448  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:25.701067  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:25.701095  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:25.755359  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:25.755393  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:25.771118  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:25.771147  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:25.830809  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:25.823565    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.824061    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.825693    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.826148    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:25.827594    4498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:25.830832  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:25.830845  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
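Each gathering pass pulls the same three host-side log sources. The exact commands, copied verbatim from the log, can be replayed by hand on the guest:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u containerd -n 400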
	I1202 22:30:28.355998  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:28.366515  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:28.366588  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:28.391593  546345 cri.go:89] found id: ""
	I1202 22:30:28.391618  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.391627  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:28.391634  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:28.391694  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:28.420025  546345 cri.go:89] found id: ""
	I1202 22:30:28.420051  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.420060  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:28.420073  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:28.420137  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:28.444623  546345 cri.go:89] found id: ""
	I1202 22:30:28.444647  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.444655  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:28.444662  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:28.444726  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:28.469992  546345 cri.go:89] found id: ""
	I1202 22:30:28.470015  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.470024  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:28.470030  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:28.470089  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:28.495503  546345 cri.go:89] found id: ""
	I1202 22:30:28.495580  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.495602  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:28.495616  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:28.495687  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:28.520105  546345 cri.go:89] found id: ""
	I1202 22:30:28.520130  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.520139  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:28.520145  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:28.520207  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:28.547412  546345 cri.go:89] found id: ""
	I1202 22:30:28.547444  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.547454  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:28.547460  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:28.547522  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:28.572324  546345 cri.go:89] found id: ""
	I1202 22:30:28.572349  546345 logs.go:282] 0 containers: []
	W1202 22:30:28.572358  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:28.572367  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:28.572379  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:28.587929  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:28.587952  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:28.651756  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:28.642887    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.643983    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.645771    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.646308    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:28.648089    4599 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:28.651790  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:28.651803  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:28.676386  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:28.676421  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:28.708051  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:28.708079  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
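The container-status step uses a runtime fallback: the command substitution (which crictl || echo crictl) resolves to the crictl path when present, otherwise to the bare word crictl, so the first command fails and docker is tried instead. Verbatim from the log:

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a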
	I1202 22:30:31.265370  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:31.275659  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:31.275728  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:31.335888  546345 cri.go:89] found id: ""
	I1202 22:30:31.335928  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.335956  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:31.335970  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:31.336049  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:31.386854  546345 cri.go:89] found id: ""
	I1202 22:30:31.386880  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.386888  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:31.386895  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:31.386979  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:31.410707  546345 cri.go:89] found id: ""
	I1202 22:30:31.410731  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.410739  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:31.410746  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:31.410804  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:31.439172  546345 cri.go:89] found id: ""
	I1202 22:30:31.439239  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.439263  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:31.439276  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:31.439355  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:31.467199  546345 cri.go:89] found id: ""
	I1202 22:30:31.467277  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.467293  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:31.467301  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:31.467390  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:31.495081  546345 cri.go:89] found id: ""
	I1202 22:30:31.495155  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.495178  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:31.495193  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:31.495270  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:31.518280  546345 cri.go:89] found id: ""
	I1202 22:30:31.518306  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.518315  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:31.518323  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:31.518400  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:31.543715  546345 cri.go:89] found id: ""
	I1202 22:30:31.543757  546345 logs.go:282] 0 containers: []
	W1202 22:30:31.543793  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:31.543809  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:31.543821  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:31.601359  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:31.601392  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:31.617291  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:31.617323  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:31.682689  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:31.674005    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.674679    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.676468    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.677142    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:31.678841    4714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:31.682713  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:31.682727  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:31.706626  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:31.706661  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:34.235905  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:34.246438  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:34.246560  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:34.271279  546345 cri.go:89] found id: ""
	I1202 22:30:34.271350  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.271365  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:34.271374  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:34.271434  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:34.303460  546345 cri.go:89] found id: ""
	I1202 22:30:34.303498  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.303507  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:34.303513  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:34.303635  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:34.355759  546345 cri.go:89] found id: ""
	I1202 22:30:34.355786  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.355795  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:34.355801  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:34.355908  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:34.402466  546345 cri.go:89] found id: ""
	I1202 22:30:34.402553  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.402572  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:34.402580  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:34.402654  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:34.431909  546345 cri.go:89] found id: ""
	I1202 22:30:34.431932  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.431941  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:34.431947  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:34.432004  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:34.455451  546345 cri.go:89] found id: ""
	I1202 22:30:34.455476  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.455484  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:34.455491  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:34.455632  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:34.478771  546345 cri.go:89] found id: ""
	I1202 22:30:34.478797  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.478805  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:34.478812  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:34.478904  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:34.502377  546345 cri.go:89] found id: ""
	I1202 22:30:34.502452  546345 logs.go:282] 0 containers: []
	W1202 22:30:34.502468  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:34.502479  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:34.502490  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:34.559881  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:34.559925  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:34.576755  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:34.576785  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:34.640203  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:34.633348    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.633906    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.635346    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.635740    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:34.637154    4826 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:34.640223  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:34.640236  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:34.664331  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:34.664368  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:37.198596  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:37.208910  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:37.208981  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:37.233321  546345 cri.go:89] found id: ""
	I1202 22:30:37.233346  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.233354  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:37.233361  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:37.233419  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:37.259307  546345 cri.go:89] found id: ""
	I1202 22:30:37.259331  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.259340  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:37.259346  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:37.259404  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:37.282333  546345 cri.go:89] found id: ""
	I1202 22:30:37.282358  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.282367  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:37.282373  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:37.282430  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:37.351993  546345 cri.go:89] found id: ""
	I1202 22:30:37.352018  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.352027  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:37.352034  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:37.352124  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:37.398805  546345 cri.go:89] found id: ""
	I1202 22:30:37.398829  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.398840  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:37.398847  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:37.398912  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:37.422987  546345 cri.go:89] found id: ""
	I1202 22:30:37.423010  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.423019  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:37.423026  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:37.423100  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:37.447502  546345 cri.go:89] found id: ""
	I1202 22:30:37.447528  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.447537  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:37.447544  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:37.447630  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:37.471899  546345 cri.go:89] found id: ""
	I1202 22:30:37.471934  546345 logs.go:282] 0 containers: []
	W1202 22:30:37.471943  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:37.471952  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:37.471963  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:37.528313  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:37.528350  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:37.544433  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:37.544464  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:37.611970  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:37.603634    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.604306    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.606167    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.606744    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:37.608686    4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:37.611994  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:37.612007  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:37.636937  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:37.636971  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:40.165587  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:40.177235  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:40.177323  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:40.205543  546345 cri.go:89] found id: ""
	I1202 22:30:40.205568  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.205576  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:40.205583  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:40.205644  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:40.232642  546345 cri.go:89] found id: ""
	I1202 22:30:40.232668  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.232677  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:40.232684  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:40.232746  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:40.259447  546345 cri.go:89] found id: ""
	I1202 22:30:40.259482  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.259496  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:40.259503  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:40.259591  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:40.297166  546345 cri.go:89] found id: ""
	I1202 22:30:40.297190  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.297198  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:40.297205  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:40.297268  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:40.337983  546345 cri.go:89] found id: ""
	I1202 22:30:40.338005  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.338014  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:40.338020  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:40.338079  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:40.380237  546345 cri.go:89] found id: ""
	I1202 22:30:40.380266  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.380274  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:40.380282  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:40.380343  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:40.412498  546345 cri.go:89] found id: ""
	I1202 22:30:40.412563  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.412572  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:40.412579  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:40.412637  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:40.441910  546345 cri.go:89] found id: ""
	I1202 22:30:40.441934  546345 logs.go:282] 0 containers: []
	W1202 22:30:40.441943  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:40.441952  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:40.441969  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:40.496209  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:40.496245  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:40.512922  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:40.512953  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:40.580850  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:40.572953    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.573782    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.575424    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.575716    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:40.577152    5053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:40.580875  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:40.580887  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:40.605967  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:40.606001  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:43.139166  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:43.149443  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:43.149516  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:43.177064  546345 cri.go:89] found id: ""
	I1202 22:30:43.177091  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.177099  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:43.177106  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:43.177164  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:43.201811  546345 cri.go:89] found id: ""
	I1202 22:30:43.201837  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.201845  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:43.201852  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:43.201912  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:43.225492  546345 cri.go:89] found id: ""
	I1202 22:30:43.225520  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.225529  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:43.225536  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:43.225594  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:43.249036  546345 cri.go:89] found id: ""
	I1202 22:30:43.249064  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.249072  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:43.249079  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:43.249139  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:43.277251  546345 cri.go:89] found id: ""
	I1202 22:30:43.277276  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.277285  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:43.277297  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:43.277354  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:43.314360  546345 cri.go:89] found id: ""
	I1202 22:30:43.314396  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.314406  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:43.314413  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:43.314488  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:43.374629  546345 cri.go:89] found id: ""
	I1202 22:30:43.374657  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.374666  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:43.374672  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:43.374730  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:43.416766  546345 cri.go:89] found id: ""
	I1202 22:30:43.416794  546345 logs.go:282] 0 containers: []
	W1202 22:30:43.416803  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:43.416812  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:43.416823  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:43.471606  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:43.471644  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:43.487334  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:43.487362  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:43.553915  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:43.545764    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:43.546550    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:43.548203    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:43.548575    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:43.550132    5165 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:43.553939  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:43.553952  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:43.579222  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:43.579258  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:46.107248  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:46.118081  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:46.118150  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:46.142754  546345 cri.go:89] found id: ""
	I1202 22:30:46.142781  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.142789  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:46.142796  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:46.142861  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:46.169825  546345 cri.go:89] found id: ""
	I1202 22:30:46.169849  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.169858  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:46.169864  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:46.169929  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:46.196691  546345 cri.go:89] found id: ""
	I1202 22:30:46.196719  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.196728  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:46.196734  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:46.196796  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:46.221449  546345 cri.go:89] found id: ""
	I1202 22:30:46.221476  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.221485  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:46.221492  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:46.221552  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:46.246043  546345 cri.go:89] found id: ""
	I1202 22:30:46.246108  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.246131  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:46.246145  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:46.246227  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:46.271663  546345 cri.go:89] found id: ""
	I1202 22:30:46.271687  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.271695  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:46.271702  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:46.271760  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:46.315379  546345 cri.go:89] found id: ""
	I1202 22:30:46.315404  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.315413  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:46.315420  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:46.315477  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:46.359855  546345 cri.go:89] found id: ""
	I1202 22:30:46.359883  546345 logs.go:282] 0 containers: []
	W1202 22:30:46.359893  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:46.359903  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:46.359915  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:46.377127  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:46.377158  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:46.445559  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:46.437310    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.438174    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.439869    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.440445    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:46.442197    5276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:46.445583  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:46.445605  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:46.473713  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:46.473754  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:46.501189  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:46.501221  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:49.058128  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:49.068126  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:49.068198  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:49.092263  546345 cri.go:89] found id: ""
	I1202 22:30:49.092288  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.092297  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:49.092303  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:49.092360  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:49.115983  546345 cri.go:89] found id: ""
	I1202 22:30:49.116008  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.116017  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:49.116024  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:49.116081  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:49.139874  546345 cri.go:89] found id: ""
	I1202 22:30:49.139899  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.139908  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:49.139915  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:49.139971  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:49.164359  546345 cri.go:89] found id: ""
	I1202 22:30:49.164388  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.164397  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:49.164404  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:49.164485  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:49.189339  546345 cri.go:89] found id: ""
	I1202 22:30:49.189365  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.189374  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:49.189383  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:49.189440  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:49.213800  546345 cri.go:89] found id: ""
	I1202 22:30:49.213826  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.213835  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:49.213842  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:49.213899  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:49.238436  546345 cri.go:89] found id: ""
	I1202 22:30:49.238463  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.238473  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:49.238480  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:49.238540  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:49.267385  546345 cri.go:89] found id: ""
	I1202 22:30:49.267459  546345 logs.go:282] 0 containers: []
	W1202 22:30:49.267483  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:49.267500  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:49.267523  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:49.332624  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:49.332664  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:49.365875  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:49.365902  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:49.443796  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:49.436340    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.436862    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.438439    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.438890    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:49.440534    5394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:49.443869  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:49.443888  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:49.467900  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:49.467933  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:51.996457  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:52.009596  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:52.009694  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:52.037146  546345 cri.go:89] found id: ""
	I1202 22:30:52.037172  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.037190  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:52.037197  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:52.037257  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:52.063683  546345 cri.go:89] found id: ""
	I1202 22:30:52.063708  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.063717  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:52.063724  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:52.063786  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:52.089573  546345 cri.go:89] found id: ""
	I1202 22:30:52.089598  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.089606  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:52.089613  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:52.089704  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:52.114785  546345 cri.go:89] found id: ""
	I1202 22:30:52.114810  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.114819  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:52.114826  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:52.114884  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:52.137456  546345 cri.go:89] found id: ""
	I1202 22:30:52.137479  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.137489  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:52.137495  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:52.137552  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:52.161392  546345 cri.go:89] found id: ""
	I1202 22:30:52.161418  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.161426  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:52.161433  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:52.161544  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:52.186619  546345 cri.go:89] found id: ""
	I1202 22:30:52.186640  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.186648  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:52.186658  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:52.186717  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:52.211047  546345 cri.go:89] found id: ""
	I1202 22:30:52.211069  546345 logs.go:282] 0 containers: []
	W1202 22:30:52.211077  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:52.211086  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:52.211097  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:52.240049  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:52.240079  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:52.297727  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:52.297804  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:52.326988  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:52.327061  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:52.421545  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:52.413695    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.414266    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.415896    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.416344    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:52.418034    5517 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:52.421566  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:52.421578  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:54.945402  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:54.955618  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:54.955688  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:54.981106  546345 cri.go:89] found id: ""
	I1202 22:30:54.981132  546345 logs.go:282] 0 containers: []
	W1202 22:30:54.981140  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:54.981147  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:54.981210  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:55.017766  546345 cri.go:89] found id: ""
	I1202 22:30:55.017789  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.017798  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:55.017805  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:55.017886  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:55.051218  546345 cri.go:89] found id: ""
	I1202 22:30:55.051293  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.051320  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:55.051342  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:55.051449  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:55.092842  546345 cri.go:89] found id: ""
	I1202 22:30:55.092869  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.092879  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:55.092886  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:55.092955  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:55.131432  546345 cri.go:89] found id: ""
	I1202 22:30:55.131517  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.131546  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:55.131570  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:55.131702  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:55.166611  546345 cri.go:89] found id: ""
	I1202 22:30:55.166639  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.166653  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:55.166661  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:55.166737  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:55.197157  546345 cri.go:89] found id: ""
	I1202 22:30:55.197183  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.197199  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:55.197206  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:55.197277  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:55.229014  546345 cri.go:89] found id: ""
	I1202 22:30:55.229045  546345 logs.go:282] 0 containers: []
	W1202 22:30:55.229053  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:55.229062  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:55.229074  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:55.284839  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:55.284877  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:55.312855  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:55.312884  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:55.414558  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:55.406788    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.407346    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.408947    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.409345    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:55.410922    5620 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:55.414580  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:55.414595  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:55.439435  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:55.439472  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:30:57.966587  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:30:57.977332  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:30:57.977425  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:30:58.009109  546345 cri.go:89] found id: ""
	I1202 22:30:58.009146  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.009155  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:30:58.009162  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:30:58.009277  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:30:58.034957  546345 cri.go:89] found id: ""
	I1202 22:30:58.034980  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.034989  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:30:58.034996  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:30:58.035075  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:30:58.059651  546345 cri.go:89] found id: ""
	I1202 22:30:58.059677  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.059687  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:30:58.059694  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:30:58.059754  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:30:58.092476  546345 cri.go:89] found id: ""
	I1202 22:30:58.092510  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.092520  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:30:58.092527  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:30:58.092601  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:30:58.116505  546345 cri.go:89] found id: ""
	I1202 22:30:58.116531  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.116539  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:30:58.116545  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:30:58.116617  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:30:58.141152  546345 cri.go:89] found id: ""
	I1202 22:30:58.141180  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.141189  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:30:58.141196  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:30:58.141252  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:30:58.167272  546345 cri.go:89] found id: ""
	I1202 22:30:58.167294  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.167302  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:30:58.167308  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:30:58.167365  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:30:58.193236  546345 cri.go:89] found id: ""
	I1202 22:30:58.193311  546345 logs.go:282] 0 containers: []
	W1202 22:30:58.193334  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:30:58.193351  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:30:58.193374  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:30:58.248292  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:30:58.248365  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:30:58.263580  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:30:58.263610  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:30:58.374750  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:30:58.366839    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.367545    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.369133    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.369607    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:30:58.371475    5730 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:30:58.374772  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:30:58.374784  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:30:58.401522  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:30:58.401558  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:00.931781  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:00.941965  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:00.942042  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:00.966926  546345 cri.go:89] found id: ""
	I1202 22:31:00.966950  546345 logs.go:282] 0 containers: []
	W1202 22:31:00.966958  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:00.966965  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:00.967026  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:00.991438  546345 cri.go:89] found id: ""
	I1202 22:31:00.991463  546345 logs.go:282] 0 containers: []
	W1202 22:31:00.991472  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:00.991479  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:00.991538  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:01.019713  546345 cri.go:89] found id: ""
	I1202 22:31:01.019737  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.019745  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:01.019752  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:01.019809  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:01.044143  546345 cri.go:89] found id: ""
	I1202 22:31:01.044166  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.044174  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:01.044181  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:01.044240  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:01.069071  546345 cri.go:89] found id: ""
	I1202 22:31:01.069094  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.069102  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:01.069109  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:01.069170  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:01.097613  546345 cri.go:89] found id: ""
	I1202 22:31:01.097639  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.097648  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:01.097688  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:01.097754  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:01.124227  546345 cri.go:89] found id: ""
	I1202 22:31:01.124251  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.124260  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:01.124267  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:01.124329  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:01.150457  546345 cri.go:89] found id: ""
	I1202 22:31:01.150483  546345 logs.go:282] 0 containers: []
	W1202 22:31:01.150491  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:01.150501  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:01.150512  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:01.175721  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:01.175753  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:01.204876  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:01.204907  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:01.261532  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:01.261567  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:01.277504  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:01.277531  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:01.369721  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:01.355410    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.360001    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.360744    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.364140    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:01.364693    5854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:03.870061  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:03.880451  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:03.880522  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:03.903663  546345 cri.go:89] found id: ""
	I1202 22:31:03.903688  546345 logs.go:282] 0 containers: []
	W1202 22:31:03.903698  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:03.903704  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:03.903767  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:03.927883  546345 cri.go:89] found id: ""
	I1202 22:31:03.927904  546345 logs.go:282] 0 containers: []
	W1202 22:31:03.927913  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:03.927920  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:03.927982  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:03.952301  546345 cri.go:89] found id: ""
	I1202 22:31:03.952324  546345 logs.go:282] 0 containers: []
	W1202 22:31:03.952332  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:03.952339  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:03.952397  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:03.977367  546345 cri.go:89] found id: ""
	I1202 22:31:03.977390  546345 logs.go:282] 0 containers: []
	W1202 22:31:03.977399  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:03.977406  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:03.977465  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:04.003308  546345 cri.go:89] found id: ""
	I1202 22:31:04.003336  546345 logs.go:282] 0 containers: []
	W1202 22:31:04.003347  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:04.003361  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:04.003438  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:04.030694  546345 cri.go:89] found id: ""
	I1202 22:31:04.030718  546345 logs.go:282] 0 containers: []
	W1202 22:31:04.030731  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:04.030738  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:04.030812  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:04.056404  546345 cri.go:89] found id: ""
	I1202 22:31:04.056430  546345 logs.go:282] 0 containers: []
	W1202 22:31:04.056439  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:04.056446  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:04.056506  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:04.081740  546345 cri.go:89] found id: ""
	I1202 22:31:04.081762  546345 logs.go:282] 0 containers: []
	W1202 22:31:04.081770  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:04.081779  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:04.081792  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:04.109259  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:04.109285  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:04.165104  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:04.165137  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:04.181694  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:04.181725  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:04.241465  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:04.234394    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.234783    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.236525    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.236860    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:04.238270    5966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:04.241493  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:04.241506  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
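
Every cycle in this stretch has the same shape: minikube first probes for a live kube-apiserver process, then asks the CRI for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and falls back to collecting kubelet, containerd, dmesg, and container-status output; the describe-nodes step then fails because nothing is listening on localhost:8443. A minimal sketch of the same checks run by hand inside the node; the `minikube ssh` entry point and the `/livez` probe are assumptions, not taken from this log:

	# inside the node (e.g. via `minikube ssh`)
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'      # no output: the apiserver process never started
	sudo crictl ps -a --quiet --name=kube-apiserver   # empty: no container was even created
	curl -ksS https://localhost:8443/livez || true    # "connection refused" while the apiserver is down
	sudo journalctl -u kubelet -n 50 --no-pager       # kubelet errors usually say why it cannot start
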
	I1202 22:31:06.766561  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:06.777372  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:06.777445  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:06.807209  546345 cri.go:89] found id: ""
	I1202 22:31:06.807235  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.807244  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:06.807251  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:06.807356  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:06.833401  546345 cri.go:89] found id: ""
	I1202 22:31:06.833424  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.833433  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:06.833439  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:06.833497  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:06.858407  546345 cri.go:89] found id: ""
	I1202 22:31:06.858434  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.858442  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:06.858449  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:06.858509  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:06.884341  546345 cri.go:89] found id: ""
	I1202 22:31:06.884367  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.884375  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:06.884382  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:06.884445  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:06.911764  546345 cri.go:89] found id: ""
	I1202 22:31:06.911787  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.911796  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:06.911802  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:06.911861  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:06.940179  546345 cri.go:89] found id: ""
	I1202 22:31:06.940204  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.940217  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:06.940225  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:06.940289  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:06.965277  546345 cri.go:89] found id: ""
	I1202 22:31:06.965304  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.965313  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:06.965320  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:06.965390  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:06.991270  546345 cri.go:89] found id: ""
	I1202 22:31:06.991294  546345 logs.go:282] 0 containers: []
	W1202 22:31:06.991303  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:06.991313  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:06.991326  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:07.060741  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:07.051593    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.052288    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.053853    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.054275    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:07.057516    6065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:07.060762  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:07.060778  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:07.085921  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:07.085970  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:07.113268  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:07.113298  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:07.169055  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:07.169092  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
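
The cri.go lines name the runc state root for containerd's k8s.io namespace, which is where kubelet-managed containers live. The same emptiness can be cross-checked with containerd's own CLI instead of crictl; the namespace comes from the log, while `ctr` being present in the node image is an assumption:

	# list containers and running tasks in the kubernetes containerd namespace
	sudo ctr -n k8s.io containers list
	sudo ctr -n k8s.io tasks list   # both listings come back empty in this state
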
	I1202 22:31:09.686487  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:09.697143  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:09.697217  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:09.722726  546345 cri.go:89] found id: ""
	I1202 22:31:09.722749  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.722760  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:09.722767  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:09.722826  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:09.748226  546345 cri.go:89] found id: ""
	I1202 22:31:09.748251  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.748260  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:09.748267  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:09.748327  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:09.774010  546345 cri.go:89] found id: ""
	I1202 22:31:09.774035  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.774043  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:09.774050  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:09.774109  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:09.800227  546345 cri.go:89] found id: ""
	I1202 22:31:09.800250  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.800259  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:09.800266  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:09.800328  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:09.828744  546345 cri.go:89] found id: ""
	I1202 22:31:09.828768  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.828777  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:09.828784  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:09.828843  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:09.853554  546345 cri.go:89] found id: ""
	I1202 22:31:09.853577  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.853586  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:09.853593  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:09.853672  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:09.879248  546345 cri.go:89] found id: ""
	I1202 22:31:09.879271  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.879279  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:09.879285  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:09.879350  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:09.908338  546345 cri.go:89] found id: ""
	I1202 22:31:09.908364  546345 logs.go:282] 0 containers: []
	W1202 22:31:09.908373  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:09.908383  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:09.908394  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:09.936944  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:09.936974  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:09.993598  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:09.993644  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:10.010732  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:10.010766  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:10.084652  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:10.073833    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.074265    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.077620    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.078616    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:10.080339    6195 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:10.084677  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:10.084692  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:12.613817  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:12.624680  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:12.624765  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:12.651202  546345 cri.go:89] found id: ""
	I1202 22:31:12.651227  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.651236  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:12.651243  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:12.651301  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:12.676106  546345 cri.go:89] found id: ""
	I1202 22:31:12.676130  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.676138  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:12.676145  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:12.676202  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:12.700680  546345 cri.go:89] found id: ""
	I1202 22:31:12.700706  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.700716  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:12.700723  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:12.700787  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:12.726023  546345 cri.go:89] found id: ""
	I1202 22:31:12.726049  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.726059  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:12.726066  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:12.726126  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:12.750927  546345 cri.go:89] found id: ""
	I1202 22:31:12.750951  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.750959  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:12.750966  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:12.751026  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:12.777535  546345 cri.go:89] found id: ""
	I1202 22:31:12.777562  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.777570  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:12.777577  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:12.777634  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:12.801546  546345 cri.go:89] found id: ""
	I1202 22:31:12.801572  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.801581  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:12.801588  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:12.801646  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:12.829909  546345 cri.go:89] found id: ""
	I1202 22:31:12.829932  546345 logs.go:282] 0 containers: []
	W1202 22:31:12.829941  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:12.829950  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:12.829961  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:12.859869  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:12.859896  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:12.914732  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:12.914767  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:12.930844  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:12.930875  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:12.995842  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:12.988692    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.989211    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.990739    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.991189    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:12.992650    6310 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:12.995865  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:12.995879  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:15.522875  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:15.533513  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:15.533591  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:15.569400  546345 cri.go:89] found id: ""
	I1202 22:31:15.569424  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.569433  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:15.569439  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:15.569496  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:15.628130  546345 cri.go:89] found id: ""
	I1202 22:31:15.628152  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.628161  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:15.628167  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:15.628228  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:15.653054  546345 cri.go:89] found id: ""
	I1202 22:31:15.653076  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.653085  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:15.653092  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:15.653149  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:15.678257  546345 cri.go:89] found id: ""
	I1202 22:31:15.678281  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.678290  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:15.678296  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:15.678353  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:15.702830  546345 cri.go:89] found id: ""
	I1202 22:31:15.702856  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.702864  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:15.702871  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:15.702936  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:15.728236  546345 cri.go:89] found id: ""
	I1202 22:31:15.728261  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.728270  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:15.728276  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:15.728336  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:15.753646  546345 cri.go:89] found id: ""
	I1202 22:31:15.753694  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.753703  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:15.753710  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:15.753772  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:15.778069  546345 cri.go:89] found id: ""
	I1202 22:31:15.778092  546345 logs.go:282] 0 containers: []
	W1202 22:31:15.778101  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:15.778110  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:15.778121  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:15.834182  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:15.834217  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:15.850533  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:15.850572  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:15.911589  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:15.904443    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.904979    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.906513    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.906995    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:15.908448    6415 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:15.911609  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:15.911621  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:15.936945  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:15.936977  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
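
The container-status gather wraps crictl in a small shell fallback: the command substitution `which crictl || echo crictl` expands to the resolved path when `which` succeeds, and to the bare name otherwise, and if that invocation still fails the `||` chain falls through to `docker ps -a`. The idiom in isolation, with an illustrative path:

	CRICTL="$(which crictl || echo crictl)"    # e.g. "/usr/bin/crictl", or the literal "crictl" if not found
	sudo "$CRICTL" ps -a || sudo docker ps -a  # last resort: ask docker instead of the CRI
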
	I1202 22:31:18.470112  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:18.480648  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:18.480727  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:18.508083  546345 cri.go:89] found id: ""
	I1202 22:31:18.508109  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.508117  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:18.508124  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:18.508252  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:18.533123  546345 cri.go:89] found id: ""
	I1202 22:31:18.533149  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.533164  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:18.533172  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:18.533245  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:18.586767  546345 cri.go:89] found id: ""
	I1202 22:31:18.586791  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.586800  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:18.586806  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:18.586866  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:18.626205  546345 cri.go:89] found id: ""
	I1202 22:31:18.626227  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.626236  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:18.626242  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:18.626299  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:18.653977  546345 cri.go:89] found id: ""
	I1202 22:31:18.653998  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.654007  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:18.654013  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:18.654074  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:18.679194  546345 cri.go:89] found id: ""
	I1202 22:31:18.679227  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.679237  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:18.679244  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:18.679305  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:18.704215  546345 cri.go:89] found id: ""
	I1202 22:31:18.704280  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.704305  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:18.704326  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:18.704411  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:18.729467  546345 cri.go:89] found id: ""
	I1202 22:31:18.729536  546345 logs.go:282] 0 containers: []
	W1202 22:31:18.729560  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:18.729583  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:18.729624  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:18.745333  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:18.745406  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:18.810842  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:18.802788    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.803411    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.805114    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.805719    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:18.807226    6528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:18.810886  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:18.810899  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:18.836014  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:18.836050  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:18.864189  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:18.864230  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:21.420147  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:21.430404  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:21.430516  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:21.454558  546345 cri.go:89] found id: ""
	I1202 22:31:21.454583  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.454592  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:21.454599  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:21.454658  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:21.478328  546345 cri.go:89] found id: ""
	I1202 22:31:21.478360  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.478369  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:21.478377  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:21.478445  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:21.502704  546345 cri.go:89] found id: ""
	I1202 22:31:21.502729  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.502737  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:21.502744  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:21.502805  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:21.528175  546345 cri.go:89] found id: ""
	I1202 22:31:21.528201  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.528209  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:21.528216  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:21.528278  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:21.616594  546345 cri.go:89] found id: ""
	I1202 22:31:21.616622  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.616632  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:21.616638  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:21.616697  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:21.645131  546345 cri.go:89] found id: ""
	I1202 22:31:21.645160  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.645168  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:21.645178  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:21.645238  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:21.671523  546345 cri.go:89] found id: ""
	I1202 22:31:21.671545  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.671553  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:21.671564  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:21.671624  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:21.695173  546345 cri.go:89] found id: ""
	I1202 22:31:21.695195  546345 logs.go:282] 0 containers: []
	W1202 22:31:21.695203  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:21.695212  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:21.695222  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:21.719757  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:21.719792  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:21.749635  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:21.749681  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:21.808026  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:21.808062  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:21.823780  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:21.823809  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:21.884457  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:21.877494    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.878140    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.879593    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.879994    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:21.881385    6656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:24.384744  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:24.394799  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:24.394871  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:24.420708  546345 cri.go:89] found id: ""
	I1202 22:31:24.420731  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.420740  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:24.420747  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:24.420804  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:24.444913  546345 cri.go:89] found id: ""
	I1202 22:31:24.444938  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.444947  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:24.444953  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:24.445011  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:24.468474  546345 cri.go:89] found id: ""
	I1202 22:31:24.468562  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.468586  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:24.468619  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:24.468712  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:24.492364  546345 cri.go:89] found id: ""
	I1202 22:31:24.492435  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.492459  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:24.492479  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:24.492570  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:24.517358  546345 cri.go:89] found id: ""
	I1202 22:31:24.517434  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.517473  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:24.517498  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:24.517589  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:24.556717  546345 cri.go:89] found id: ""
	I1202 22:31:24.556800  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.556829  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:24.556870  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:24.556990  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:24.641499  546345 cri.go:89] found id: ""
	I1202 22:31:24.641533  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.641542  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:24.641549  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:24.641704  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:24.665998  546345 cri.go:89] found id: ""
	I1202 22:31:24.666024  546345 logs.go:282] 0 containers: []
	W1202 22:31:24.666032  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:24.666041  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:24.666053  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:24.720801  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:24.720835  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:24.736228  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:24.736255  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:24.802911  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:24.795538    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.796038    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.797728    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.798172    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:24.799727    6759 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:24.802934  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:24.802948  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:24.826675  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:24.826710  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
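The collector loop visible here repeats roughly every three seconds: poll for a running apiserver process with sudo pgrep, list each expected control-plane container by name via crictl, get an empty ID list for every one, then fall back to gathering kubelet, dmesg, describe-nodes, containerd, and container-status output. The same per-component check can be reproduced by hand; a minimal bash sketch, where "CHANGE_ME" stands in for a profile name that is not taken from this log:

    # Sketch only: re-run the collector's per-component container check over minikube ssh.
    # "CHANGE_ME" is a placeholder profile name, not one taken from this log.
    profile="CHANGE_ME"
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(minikube -p "$profile" ssh -- sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching \"$name\""
    done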
	I1202 22:31:27.352424  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:27.363728  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:27.363800  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:27.388330  546345 cri.go:89] found id: ""
	I1202 22:31:27.388356  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.388365  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:27.388372  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:27.388430  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:27.412561  546345 cri.go:89] found id: ""
	I1202 22:31:27.412589  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.412598  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:27.412605  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:27.412664  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:27.436953  546345 cri.go:89] found id: ""
	I1202 22:31:27.436982  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.436991  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:27.436997  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:27.437057  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:27.461746  546345 cri.go:89] found id: ""
	I1202 22:31:27.461775  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.461783  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:27.461790  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:27.461847  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:27.489561  546345 cri.go:89] found id: ""
	I1202 22:31:27.489598  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.489607  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:27.489614  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:27.489708  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:27.517814  546345 cri.go:89] found id: ""
	I1202 22:31:27.517835  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.517844  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:27.517851  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:27.517909  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:27.545611  546345 cri.go:89] found id: ""
	I1202 22:31:27.545711  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.545734  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:27.545754  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:27.545839  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:27.603444  546345 cri.go:89] found id: ""
	I1202 22:31:27.603466  546345 logs.go:282] 0 containers: []
	W1202 22:31:27.603474  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:27.603484  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:27.603497  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:27.674112  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:27.674149  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:27.690096  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:27.690128  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:27.752579  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:27.743812    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.744655    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.746641    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.747382    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:27.749154    6874 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:27.752604  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:27.752617  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:27.777612  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:27.777647  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
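One detail of the container-status command above is worth spelling out: the backtick substitution resolves crictl through which, and if crictl is not on PATH it degrades to the bare word crictl, so the sudo invocation still fails with a readable error; the trailing || sudo docker ps -a covers clusters using the Docker runtime. The same idiom in isolation, purely illustrative:

    # Illustrative only: the PATH-resolution-with-fallback idiom from the log line above.
    runtime_ps() {
      sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
    }
    runtime_ps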
	I1202 22:31:30.305694  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:30.316225  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:30.316348  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:30.339916  546345 cri.go:89] found id: ""
	I1202 22:31:30.339950  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.339959  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:30.339974  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:30.340052  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:30.369549  546345 cri.go:89] found id: ""
	I1202 22:31:30.369575  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.369584  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:30.369590  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:30.369677  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:30.394634  546345 cri.go:89] found id: ""
	I1202 22:31:30.394711  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.394734  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:30.394749  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:30.394830  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:30.419244  546345 cri.go:89] found id: ""
	I1202 22:31:30.419271  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.419279  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:30.419286  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:30.419344  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:30.447382  546345 cri.go:89] found id: ""
	I1202 22:31:30.447414  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.447423  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:30.447430  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:30.447530  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:30.471131  546345 cri.go:89] found id: ""
	I1202 22:31:30.471155  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.471163  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:30.471170  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:30.471236  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:30.496091  546345 cri.go:89] found id: ""
	I1202 22:31:30.496116  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.496125  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:30.496132  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:30.496209  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:30.520739  546345 cri.go:89] found id: ""
	I1202 22:31:30.520767  546345 logs.go:282] 0 containers: []
	W1202 22:31:30.520775  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:30.520785  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:30.520796  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:30.549966  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:30.550055  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:30.602152  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:30.602176  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:30.668135  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:30.668172  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:30.683585  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:30.683653  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:30.747838  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:30.740098    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.740709    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.742156    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.742602    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:30.744015    6998 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
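Every describe-nodes failure in this stretch has the same shape: kubectl's discovery client retries the API group list and each request to https://localhost:8443/api is refused at [::1]:8443, which means nothing is listening on the apiserver port and matches the empty crictl listings. A quick manual confirmation, assuming ss and curl are available in the node image and with "CHANGE_ME" again as a placeholder profile:

    # Sketch only: confirm nothing is serving the apiserver port inside the node.
    # "CHANGE_ME" is a placeholder; assumes ss and curl exist in the node image.
    profile="CHANGE_ME"
    minikube -p "$profile" ssh -- "sudo ss -ltnp | grep 8443 || echo 'nothing listening on 8443'"
    minikube -p "$profile" ssh -- "curl -sk https://localhost:8443/livez || echo 'apiserver unreachable'"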
	I1202 22:31:33.249502  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:33.259480  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:33.259551  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:33.282766  546345 cri.go:89] found id: ""
	I1202 22:31:33.282791  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.282799  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:33.282806  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:33.282866  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:33.308495  546345 cri.go:89] found id: ""
	I1202 22:31:33.308518  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.308533  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:33.308540  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:33.308597  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:33.331979  546345 cri.go:89] found id: ""
	I1202 22:31:33.332013  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.332023  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:33.332030  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:33.332100  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:33.356278  546345 cri.go:89] found id: ""
	I1202 22:31:33.356304  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.356313  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:33.356319  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:33.356378  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:33.384857  546345 cri.go:89] found id: ""
	I1202 22:31:33.384885  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.384893  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:33.384900  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:33.384959  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:33.409699  546345 cri.go:89] found id: ""
	I1202 22:31:33.409727  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.409735  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:33.409742  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:33.409818  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:33.433952  546345 cri.go:89] found id: ""
	I1202 22:31:33.433976  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.433984  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:33.433991  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:33.434048  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:33.457225  546345 cri.go:89] found id: ""
	I1202 22:31:33.457250  546345 logs.go:282] 0 containers: []
	W1202 22:31:33.457265  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:33.457274  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:33.457286  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:33.481072  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:33.481106  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:33.513367  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:33.513402  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:33.575454  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:33.575500  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:33.611865  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:33.611895  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:33.687837  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:33.680605    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:33.681164    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:33.682734    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:33.683164    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:33.684656    7114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:36.188106  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:36.198524  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:36.198595  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:36.227262  546345 cri.go:89] found id: ""
	I1202 22:31:36.227286  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.227294  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:36.227301  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:36.227364  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:36.251229  546345 cri.go:89] found id: ""
	I1202 22:31:36.251254  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.251262  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:36.251269  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:36.251328  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:36.280094  546345 cri.go:89] found id: ""
	I1202 22:31:36.280118  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.280128  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:36.280135  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:36.280192  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:36.303557  546345 cri.go:89] found id: ""
	I1202 22:31:36.303589  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.303598  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:36.303606  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:36.303680  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:36.328036  546345 cri.go:89] found id: ""
	I1202 22:31:36.328099  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.328110  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:36.328117  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:36.328210  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:36.352844  546345 cri.go:89] found id: ""
	I1202 22:31:36.352919  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.352942  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:36.352963  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:36.353076  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:36.377059  546345 cri.go:89] found id: ""
	I1202 22:31:36.377123  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.377148  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:36.377169  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:36.377299  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:36.406912  546345 cri.go:89] found id: ""
	I1202 22:31:36.406939  546345 logs.go:282] 0 containers: []
	W1202 22:31:36.406947  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:36.406957  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:36.406969  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:36.462620  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:36.462655  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:36.478602  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:36.478633  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:36.553409  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:36.534656    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:36.535346    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:36.536921    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:36.537223    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:36.539840    7213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:36.553440  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:36.553453  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:36.605527  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:36.605567  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:39.147765  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:39.158330  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:39.158399  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:39.184185  546345 cri.go:89] found id: ""
	I1202 22:31:39.184211  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.184220  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:39.184227  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:39.184286  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:39.211366  546345 cri.go:89] found id: ""
	I1202 22:31:39.211390  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.211399  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:39.211405  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:39.211465  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:39.239810  546345 cri.go:89] found id: ""
	I1202 22:31:39.239836  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.239846  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:39.239853  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:39.239914  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:39.264259  546345 cri.go:89] found id: ""
	I1202 22:31:39.264285  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.264294  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:39.264300  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:39.264357  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:39.288356  546345 cri.go:89] found id: ""
	I1202 22:31:39.288384  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.288394  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:39.288400  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:39.288459  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:39.312721  546345 cri.go:89] found id: ""
	I1202 22:31:39.312745  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.312754  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:39.312760  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:39.312817  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:39.337724  546345 cri.go:89] found id: ""
	I1202 22:31:39.337748  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.337756  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:39.337762  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:39.337821  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:39.362280  546345 cri.go:89] found id: ""
	I1202 22:31:39.362303  546345 logs.go:282] 0 containers: []
	W1202 22:31:39.362311  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:39.362320  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:39.362332  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:39.389401  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:39.389425  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:39.449427  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:39.449471  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:39.464867  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:39.464897  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:39.527654  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:39.520193    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:39.521056    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:39.522506    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:39.522885    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:39.524329    7338 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:39.527675  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:39.527691  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:42.058126  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:42.070220  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:42.070305  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:42.113161  546345 cri.go:89] found id: ""
	I1202 22:31:42.113187  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.113197  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:42.113205  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:42.113279  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:42.151146  546345 cri.go:89] found id: ""
	I1202 22:31:42.151178  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.151188  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:42.151195  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:42.151267  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:42.187923  546345 cri.go:89] found id: ""
	I1202 22:31:42.187951  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.187960  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:42.187968  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:42.188040  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:42.222980  546345 cri.go:89] found id: ""
	I1202 22:31:42.223003  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.223012  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:42.223020  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:42.223088  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:42.271018  546345 cri.go:89] found id: ""
	I1202 22:31:42.271046  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.271056  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:42.271064  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:42.271136  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:42.302817  546345 cri.go:89] found id: ""
	I1202 22:31:42.302893  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.302913  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:42.302929  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:42.303020  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:42.333498  546345 cri.go:89] found id: ""
	I1202 22:31:42.333526  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.333535  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:42.333543  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:42.333630  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:42.363457  546345 cri.go:89] found id: ""
	I1202 22:31:42.363485  546345 logs.go:282] 0 containers: []
	W1202 22:31:42.363495  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:42.363505  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:42.363518  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:42.421844  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:42.421883  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:42.439113  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:42.439145  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:42.506768  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:42.497962    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:42.498854    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:42.500599    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:42.501068    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:42.502782    7444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:42.506791  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:42.506803  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:42.531455  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:42.531491  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:45.076035  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:45.089323  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:45.089414  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:45.121410  546345 cri.go:89] found id: ""
	I1202 22:31:45.121436  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.121445  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:45.121454  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:45.121523  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:45.158421  546345 cri.go:89] found id: ""
	I1202 22:31:45.158452  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.158461  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:45.158840  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:45.158933  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:45.226744  546345 cri.go:89] found id: ""
	I1202 22:31:45.226769  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.226778  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:45.226785  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:45.226855  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:45.277464  546345 cri.go:89] found id: ""
	I1202 22:31:45.277540  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.277560  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:45.277573  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:45.277920  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:45.321564  546345 cri.go:89] found id: ""
	I1202 22:31:45.321591  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.321600  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:45.321607  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:45.321695  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:45.349203  546345 cri.go:89] found id: ""
	I1202 22:31:45.349228  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.349236  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:45.349243  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:45.349302  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:45.377967  546345 cri.go:89] found id: ""
	I1202 22:31:45.377993  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.378001  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:45.378009  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:45.378068  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:45.404655  546345 cri.go:89] found id: ""
	I1202 22:31:45.404680  546345 logs.go:282] 0 containers: []
	W1202 22:31:45.404689  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:45.404697  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:45.404709  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:45.459390  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:45.459424  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:45.474938  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:45.474964  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:45.551857  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:45.534337    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:45.534849    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:45.536311    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:45.536755    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:45.538167    7555 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:31:45.551880  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:45.551893  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:45.599545  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:45.599577  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
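By this point the repeated identical passes establish that the apiserver never starts at all, so the actionable evidence sits in the journals the collector is already tailing (kubelet launches the static pods; containerd would run them) and in the static-pod manifests themselves. A hand-run equivalent, assuming kubeadm's standard manifest path and a placeholder profile:

    # Sketch only: look at why the static control-plane pods never start.
    # Assumes kubeadm's standard manifest directory; "CHANGE_ME" is a placeholder.
    profile="CHANGE_ME"
    minikube -p "$profile" ssh -- "ls -l /etc/kubernetes/manifests"
    minikube -p "$profile" ssh -- "sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40"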
	I1202 22:31:48.142376  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:48.152835  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:48.152910  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:48.176887  546345 cri.go:89] found id: ""
	I1202 22:31:48.176913  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.176921  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:48.176928  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:48.176992  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:48.199841  546345 cri.go:89] found id: ""
	I1202 22:31:48.199865  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.199873  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:48.199879  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:48.199937  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:48.223323  546345 cri.go:89] found id: ""
	I1202 22:31:48.223346  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.223354  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:48.223361  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:48.223419  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:48.246053  546345 cri.go:89] found id: ""
	I1202 22:31:48.246079  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.246088  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:48.246095  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:48.246152  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:48.269713  546345 cri.go:89] found id: ""
	I1202 22:31:48.269739  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.269748  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:48.269755  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:48.269811  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:48.295336  546345 cri.go:89] found id: ""
	I1202 22:31:48.295359  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.295368  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:48.295374  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:48.295435  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:48.318964  546345 cri.go:89] found id: ""
	I1202 22:31:48.318989  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.319001  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:48.319009  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:48.319114  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:48.342776  546345 cri.go:89] found id: ""
	I1202 22:31:48.342803  546345 logs.go:282] 0 containers: []
	W1202 22:31:48.342812  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:48.342821  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:48.342834  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:48.366473  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:48.366507  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:48.397880  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:48.397907  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:48.453030  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:48.453066  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:48.468428  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:48.468455  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:48.530252  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:48.521915    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.522761    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.524653    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.525302    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.526795    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:48.521915    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.522761    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.524653    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.525302    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:48.526795    7680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
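Every poll in this wait loop has the same shape: look for a kube-apiserver process, then ask the CRI for each expected control-plane container by name. A minimal sketch of one pass, using the same component names and crictl flags the log shows:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
        ids=$(sudo crictl ps -a --quiet --name="$c")
        [ -n "$ids" ] || echo "no container matching $c"
    done

Because crictl ps -a includes exited containers, an empty id list for every component (as here) means nothing was ever created, which points at kubelet or the static-pod manifests rather than at a crashed container.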
	I1202 22:31:51.030539  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:51.041072  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:51.041139  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:51.064958  546345 cri.go:89] found id: ""
	I1202 22:31:51.064986  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.064994  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:51.065004  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:51.065074  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:51.090247  546345 cri.go:89] found id: ""
	I1202 22:31:51.090275  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.090284  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:51.090290  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:51.090356  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:51.117187  546345 cri.go:89] found id: ""
	I1202 22:31:51.117224  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.117235  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:51.117242  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:51.117326  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:51.143456  546345 cri.go:89] found id: ""
	I1202 22:31:51.143483  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.143492  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:51.143499  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:51.143563  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:51.169463  546345 cri.go:89] found id: ""
	I1202 22:31:51.169542  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.169565  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:51.169587  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:51.169719  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:51.195977  546345 cri.go:89] found id: ""
	I1202 22:31:51.196019  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.196028  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:51.196035  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:51.196105  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:51.221006  546345 cri.go:89] found id: ""
	I1202 22:31:51.221030  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.221045  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:51.221051  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:51.221119  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:51.245434  546345 cri.go:89] found id: ""
	I1202 22:31:51.245457  546345 logs.go:282] 0 containers: []
	W1202 22:31:51.245466  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:51.245475  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:51.245486  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:51.273171  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:51.273198  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:51.328523  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:51.328562  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:51.344211  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:51.344238  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:51.405812  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:51.397619    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.398419    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.399942    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.400527    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.402107    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:51.397619    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.398419    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.399942    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.400527    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:51.402107    7793 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:51.405843  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:51.405859  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
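The three log sources collected between polls map to plain commands on the node. The dmesg flags ask for human-readable (-H), pager-free (-P), uncolored (-L=never) output restricted to warning severity and worse:

    sudo journalctl -u kubelet -n 400       # last 400 kubelet journal lines
    sudo journalctl -u containerd -n 400    # last 400 containerd journal lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400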
	I1202 22:31:53.930346  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:53.940572  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:53.940646  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:53.968500  546345 cri.go:89] found id: ""
	I1202 22:31:53.968531  546345 logs.go:282] 0 containers: []
	W1202 22:31:53.968540  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:53.968547  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:53.968605  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:53.993271  546345 cri.go:89] found id: ""
	I1202 22:31:53.993298  546345 logs.go:282] 0 containers: []
	W1202 22:31:53.993306  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:53.993314  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:53.993372  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:54.020928  546345 cri.go:89] found id: ""
	I1202 22:31:54.020956  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.020965  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:54.020973  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:54.021039  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:54.047236  546345 cri.go:89] found id: ""
	I1202 22:31:54.047260  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.047269  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:54.047276  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:54.047336  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:54.072186  546345 cri.go:89] found id: ""
	I1202 22:31:54.072219  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.072228  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:54.072235  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:54.072310  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:54.097358  546345 cri.go:89] found id: ""
	I1202 22:31:54.097390  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.097400  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:54.097407  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:54.097484  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:54.122635  546345 cri.go:89] found id: ""
	I1202 22:31:54.122739  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.122765  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:54.122787  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:54.122881  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:54.147140  546345 cri.go:89] found id: ""
	I1202 22:31:54.147205  546345 logs.go:282] 0 containers: []
	W1202 22:31:54.147228  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:54.147244  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:54.147257  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:54.209277  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:54.202024    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.202800    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.204383    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.204707    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.206238    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:54.202024    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.202800    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.204383    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.204707    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:54.206238    7890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:31:54.209298  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:54.209312  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:54.233525  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:54.233564  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:54.267595  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:54.267623  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:54.322957  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:54.322991  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:56.839135  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:56.854872  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:56.854954  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:56.883302  546345 cri.go:89] found id: ""
	I1202 22:31:56.883327  546345 logs.go:282] 0 containers: []
	W1202 22:31:56.883335  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:56.883342  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:56.883400  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:56.909437  546345 cri.go:89] found id: ""
	I1202 22:31:56.909478  546345 logs.go:282] 0 containers: []
	W1202 22:31:56.909495  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:56.909502  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:56.909574  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:56.935567  546345 cri.go:89] found id: ""
	I1202 22:31:56.935592  546345 logs.go:282] 0 containers: []
	W1202 22:31:56.935600  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:56.935607  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:56.935700  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:56.962296  546345 cri.go:89] found id: ""
	I1202 22:31:56.962322  546345 logs.go:282] 0 containers: []
	W1202 22:31:56.962339  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:56.962352  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:56.962417  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:56.987308  546345 cri.go:89] found id: ""
	I1202 22:31:56.987333  546345 logs.go:282] 0 containers: []
	W1202 22:31:56.987341  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:56.987348  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:56.987409  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:57.017409  546345 cri.go:89] found id: ""
	I1202 22:31:57.017436  546345 logs.go:282] 0 containers: []
	W1202 22:31:57.017444  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:57.017451  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:57.017519  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:31:57.043570  546345 cri.go:89] found id: ""
	I1202 22:31:57.043593  546345 logs.go:282] 0 containers: []
	W1202 22:31:57.043601  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:31:57.043607  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:31:57.043670  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:31:57.068973  546345 cri.go:89] found id: ""
	I1202 22:31:57.069005  546345 logs.go:282] 0 containers: []
	W1202 22:31:57.069014  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:31:57.069023  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:31:57.069034  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:31:57.093239  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:31:57.093275  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:31:57.120751  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:31:57.120777  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:31:57.176173  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:31:57.176209  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:31:57.193001  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:31:57.193035  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:31:57.259032  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:31:57.251882    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.252406    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.253992    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.254374    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.255868    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:31:57.251882    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.252406    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.253992    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.254374    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:31:57.255868    8021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
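The failing "describe nodes" probe can be rerun by hand from the host; <profile> below is a placeholder for the profile under test, and recent minikube releases forward a trailing command through ssh:

    minikube -p <profile> ssh -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
        describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

Against this node it fails the same way: nothing is listening on localhost:8443.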
	I1202 22:31:59.760716  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:31:59.771290  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:31:59.771364  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:31:59.819477  546345 cri.go:89] found id: ""
	I1202 22:31:59.819507  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.819521  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:31:59.819528  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:31:59.819609  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:31:59.879132  546345 cri.go:89] found id: ""
	I1202 22:31:59.879159  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.879168  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:31:59.879175  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:31:59.879235  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:31:59.909985  546345 cri.go:89] found id: ""
	I1202 22:31:59.910011  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.910020  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:31:59.910027  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:31:59.910083  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:31:59.934326  546345 cri.go:89] found id: ""
	I1202 22:31:59.934350  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.934359  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:31:59.934366  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:31:59.934424  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:31:59.963200  546345 cri.go:89] found id: ""
	I1202 22:31:59.963224  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.963233  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:31:59.963240  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:31:59.963327  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:31:59.989148  546345 cri.go:89] found id: ""
	I1202 22:31:59.989180  546345 logs.go:282] 0 containers: []
	W1202 22:31:59.989190  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:31:59.989196  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:31:59.989302  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:00.074954  546345 cri.go:89] found id: ""
	I1202 22:32:00.075036  546345 logs.go:282] 0 containers: []
	W1202 22:32:00.075063  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:00.075085  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:00.075215  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:00.226233  546345 cri.go:89] found id: ""
	I1202 22:32:00.226259  546345 logs.go:282] 0 containers: []
	W1202 22:32:00.226269  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:00.226279  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:00.226293  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:00.336324  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:00.336441  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:00.371299  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:00.371905  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:00.484267  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:00.475120    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.475618    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.477981    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.479118    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.480037    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:00.475120    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.475618    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.477981    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.479118    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:00.480037    8118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:00.484297  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:00.484311  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:00.512091  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:00.512128  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:03.068479  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:03.078801  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:03.078893  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:03.103732  546345 cri.go:89] found id: ""
	I1202 22:32:03.103758  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.103766  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:03.103773  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:03.103832  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:03.128397  546345 cri.go:89] found id: ""
	I1202 22:32:03.128426  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.128435  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:03.128441  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:03.128501  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:03.153803  546345 cri.go:89] found id: ""
	I1202 22:32:03.153877  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.153899  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:03.153913  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:03.153988  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:03.181014  546345 cri.go:89] found id: ""
	I1202 22:32:03.181038  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.181047  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:03.181053  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:03.181152  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:03.210807  546345 cri.go:89] found id: ""
	I1202 22:32:03.210834  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.210843  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:03.210850  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:03.210911  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:03.239226  546345 cri.go:89] found id: ""
	I1202 22:32:03.239251  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.239260  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:03.239267  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:03.239326  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:03.263944  546345 cri.go:89] found id: ""
	I1202 22:32:03.263969  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.263978  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:03.263984  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:03.264044  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:03.287558  546345 cri.go:89] found id: ""
	I1202 22:32:03.287583  546345 logs.go:282] 0 containers: []
	W1202 22:32:03.287592  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:03.287601  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:03.287612  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:03.311743  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:03.311776  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:03.343056  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:03.343083  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:03.397595  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:03.397629  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:03.413119  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:03.413155  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:03.475280  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:03.468130    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.468858    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.470478    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.470758    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.472212    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:03.468130    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.468858    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.470478    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.470758    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:03.472212    8244 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
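The five repeated memcache.go errors per attempt come from client-go's discovery step, which fetches /api from https://localhost:8443 before anything else. A quicker probe that skips discovery is to hit the health endpoint directly with the same binary and kubeconfig (it will fail identically while the port is closed):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz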
	I1202 22:32:05.975590  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:05.985554  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:05.985622  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:06.019132  546345 cri.go:89] found id: ""
	I1202 22:32:06.019157  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.019166  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:06.019173  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:06.019241  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:06.044254  546345 cri.go:89] found id: ""
	I1202 22:32:06.044277  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.044286  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:06.044293  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:06.044357  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:06.073518  546345 cri.go:89] found id: ""
	I1202 22:32:06.073541  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.073550  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:06.073556  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:06.073619  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:06.103333  546345 cri.go:89] found id: ""
	I1202 22:32:06.103400  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.103431  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:06.103450  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:06.103539  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:06.129000  546345 cri.go:89] found id: ""
	I1202 22:32:06.129036  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.129051  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:06.129058  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:06.129128  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:06.155243  546345 cri.go:89] found id: ""
	I1202 22:32:06.155266  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.155274  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:06.155281  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:06.155341  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:06.183834  546345 cri.go:89] found id: ""
	I1202 22:32:06.183900  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.183923  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:06.183942  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:06.184033  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:06.208508  546345 cri.go:89] found id: ""
	I1202 22:32:06.208546  546345 logs.go:282] 0 containers: []
	W1202 22:32:06.208556  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:06.208566  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:06.208578  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:06.265928  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:06.265966  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:06.281782  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:06.281811  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:06.341568  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:06.333544    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.334347    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.335275    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.336735    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.337307    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:06.333544    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.334347    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.335275    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.336735    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:06.337307    8345 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:06.341591  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:06.341603  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:06.366403  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:06.366435  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
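The "root /run/containerd/runc/k8s.io" in the cri.go lines is runc's state directory for containerd's k8s.io namespace. Assuming ctr ships in the node image alongside containerd (it normally does), the same emptiness can be cross-checked without going through the CRI:

    sudo ctr -n k8s.io containers ls    # no entries expected here
    sudo ctr -n k8s.io tasks ls         # nor any running tasks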
	I1202 22:32:08.899765  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:08.910234  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:08.910306  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:08.940951  546345 cri.go:89] found id: ""
	I1202 22:32:08.940979  546345 logs.go:282] 0 containers: []
	W1202 22:32:08.940989  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:08.940995  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:08.941054  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:08.966172  546345 cri.go:89] found id: ""
	I1202 22:32:08.966198  546345 logs.go:282] 0 containers: []
	W1202 22:32:08.966207  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:08.966214  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:08.966274  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:08.990534  546345 cri.go:89] found id: ""
	I1202 22:32:08.990561  546345 logs.go:282] 0 containers: []
	W1202 22:32:08.990569  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:08.990576  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:08.990633  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:09.016942  546345 cri.go:89] found id: ""
	I1202 22:32:09.016970  546345 logs.go:282] 0 containers: []
	W1202 22:32:09.016979  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:09.016986  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:09.017052  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:09.040852  546345 cri.go:89] found id: ""
	I1202 22:32:09.040893  546345 logs.go:282] 0 containers: []
	W1202 22:32:09.040902  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:09.040909  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:09.040978  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:09.064884  546345 cri.go:89] found id: ""
	I1202 22:32:09.064958  546345 logs.go:282] 0 containers: []
	W1202 22:32:09.064986  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:09.065005  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:09.065114  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:09.088807  546345 cri.go:89] found id: ""
	I1202 22:32:09.088878  546345 logs.go:282] 0 containers: []
	W1202 22:32:09.088903  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:09.088922  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:09.089011  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:09.115024  546345 cri.go:89] found id: ""
	I1202 22:32:09.115051  546345 logs.go:282] 0 containers: []
	W1202 22:32:09.115060  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:09.115069  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:09.115080  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:09.138651  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:09.138687  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:09.165425  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:09.165449  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:09.222720  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:09.222752  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:09.238413  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:09.238441  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:09.299159  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:09.292446    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.292889    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.294367    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.294689    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.296107    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:09.292446    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.292889    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.294367    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.294689    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:09.296107    8473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
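Note on the probe lines above: every `found id: ""` is the output of a `sudo crictl ps -a --quiet --name=<component>` query that matched no containers, which is why each component is reported as "No container was found matching". A minimal Go sketch of that probe (a hand-written illustration, not minikube's actual cri.go code) looks like this:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs asks crictl for all containers in any state whose
    // name matches the given component and returns the matching IDs.
    // An empty result mirrors the `found id: ""` lines in the log above.
    func listContainerIDs(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a",
            "--quiet", "--name", component).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainerIDs(c)
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", c)
                continue
            }
            fmt.Printf("%s: %v\n", c, ids)
        }
    }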
	I1202 22:32:11.799390  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:11.813803  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:11.813890  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:11.853262  546345 cri.go:89] found id: ""
	I1202 22:32:11.853298  546345 logs.go:282] 0 containers: []
	W1202 22:32:11.853311  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:11.853318  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:11.853394  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:11.898451  546345 cri.go:89] found id: ""
	I1202 22:32:11.898474  546345 logs.go:282] 0 containers: []
	W1202 22:32:11.898482  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:11.898489  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:11.898549  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:11.926743  546345 cri.go:89] found id: ""
	I1202 22:32:11.926817  546345 logs.go:282] 0 containers: []
	W1202 22:32:11.926840  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:11.926860  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:11.926980  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:11.950985  546345 cri.go:89] found id: ""
	I1202 22:32:11.951011  546345 logs.go:282] 0 containers: []
	W1202 22:32:11.951019  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:11.951027  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:11.951106  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:11.975373  546345 cri.go:89] found id: ""
	I1202 22:32:11.975399  546345 logs.go:282] 0 containers: []
	W1202 22:32:11.975407  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:11.975414  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:11.975490  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:12.005482  546345 cri.go:89] found id: ""
	I1202 22:32:12.005511  546345 logs.go:282] 0 containers: []
	W1202 22:32:12.005521  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:12.005529  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:12.005643  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:12.032572  546345 cri.go:89] found id: ""
	I1202 22:32:12.032597  546345 logs.go:282] 0 containers: []
	W1202 22:32:12.032607  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:12.032634  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:12.032733  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:12.059401  546345 cri.go:89] found id: ""
	I1202 22:32:12.059476  546345 logs.go:282] 0 containers: []
	W1202 22:32:12.059492  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:12.059504  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:12.059517  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:12.093142  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:12.093179  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:12.150021  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:12.150054  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:12.165956  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:12.165987  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:12.231857  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:12.225209    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.225713    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.227176    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.227478    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.228901    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:12.225209    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.225713    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.227176    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.227478    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:12.228901    8585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:12.231929  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:12.231956  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
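The log's larger shape is a poll-with-diagnostics loop: roughly every three seconds it runs `sudo pgrep -xnf kube-apiserver.*minikube.*`, and on each miss it re-probes the control-plane containers and gathers kubelet, dmesg, containerd, container-status, and "describe nodes" output before retrying (the repetitions are summarized just below). A hypothetical sketch of that loop, not minikube's actual implementation:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls for a running kube-apiserver process until the
    // deadline, mirroring the ~3s pgrep cadence visible in the log above.
    func waitForAPIServer(deadline time.Time) error {
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching process exists.
            if exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            // On a miss, minikube gathers diagnostics (kubelet, dmesg,
            // containerd, container status, describe nodes) before retrying.
            time.Sleep(3 * time.Second)
        }
        return fmt.Errorf("kube-apiserver did not appear before the deadline")
    }

    func main() {
        if err := waitForAPIServer(time.Now().Add(30 * time.Second)); err != nil {
            fmt.Println(err)
        }
    }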
(The poll-and-diagnose cycle repeats essentially verbatim at 22:32:14, 22:32:17, 22:32:20, 22:32:23, 22:32:26, and 22:32:29, differing only in timestamps and in the PIDs of the spawned kubectl processes: every pgrep and crictl query returns nothing, and every "describe nodes" attempt fails with the same connection-refused error against localhost:8443. The final iteration, at 22:32:32, follows below and is cut off mid-error where this excerpt ends.)
	I1202 22:32:32.515559  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:32.525887  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:32.525957  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:32.549813  546345 cri.go:89] found id: ""
	I1202 22:32:32.549848  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.549857  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:32.549865  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:32.549931  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:32.575230  546345 cri.go:89] found id: ""
	I1202 22:32:32.575253  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.575261  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:32.575268  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:32.575359  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:32.600349  546345 cri.go:89] found id: ""
	I1202 22:32:32.600374  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.600382  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:32.600389  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:32.600448  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:32.629053  546345 cri.go:89] found id: ""
	I1202 22:32:32.629078  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.629086  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:32.629095  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:32.629152  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:32.653727  546345 cri.go:89] found id: ""
	I1202 22:32:32.653750  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.653759  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:32.653766  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:32.653824  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:32.677981  546345 cri.go:89] found id: ""
	I1202 22:32:32.678019  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.678028  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:32.678035  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:32.678101  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:32.702199  546345 cri.go:89] found id: ""
	I1202 22:32:32.702222  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.702230  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:32.702237  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:32.702294  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:32.725924  546345 cri.go:89] found id: ""
	I1202 22:32:32.725957  546345 logs.go:282] 0 containers: []
	W1202 22:32:32.725967  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:32.725976  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:32.726002  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:32.779589  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:32.779623  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:32.807508  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:32.807541  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:32.902366  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:32.894161    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.894903    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.896591    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.896873    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.898388    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:32.894161    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.894903    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.896591    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.896873    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:32.898388    9366 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:32.902386  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:32.902399  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:32.925648  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:32.925948  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
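	Each cycle issues one crictl query per control-plane component and treats empty output as "no container found". The same sweep, bundled into a loop, assuming it runs on the node (or via minikube ssh) with crictl on PATH:

	    #!/bin/bash
	    # Hedged sketch: walk the same component list as the cycles above.
	    # An empty result for a name corresponds to a "0 containers" line in the log.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== ${name} =="
	      sudo crictl ps -a --quiet --name="${name}"
	    done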
	I1202 22:32:35.456822  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:35.467636  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:35.467796  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:35.496302  546345 cri.go:89] found id: ""
	I1202 22:32:35.496328  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.496337  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:35.496343  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:35.496407  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:35.525080  546345 cri.go:89] found id: ""
	I1202 22:32:35.525107  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.525116  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:35.525122  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:35.525187  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:35.549407  546345 cri.go:89] found id: ""
	I1202 22:32:35.549432  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.549441  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:35.549447  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:35.549505  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:35.574018  546345 cri.go:89] found id: ""
	I1202 22:32:35.574040  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.574049  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:35.574056  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:35.574115  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:35.604104  546345 cri.go:89] found id: ""
	I1202 22:32:35.604128  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.604137  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:35.604143  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:35.604201  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:35.629312  546345 cri.go:89] found id: ""
	I1202 22:32:35.629346  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.629355  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:35.629361  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:35.629427  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:35.653959  546345 cri.go:89] found id: ""
	I1202 22:32:35.653987  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.653996  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:35.654003  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:35.654064  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:35.678225  546345 cri.go:89] found id: ""
	I1202 22:32:35.678301  546345 logs.go:282] 0 containers: []
	W1202 22:32:35.678325  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:35.678343  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:35.678368  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:35.733851  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:35.733884  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:35.749526  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:35.749554  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:35.844900  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:35.824762    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.828451    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.838437    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.839235    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.840948    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:35.824762    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.828451    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.838437    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.839235    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:35.840948    9473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:35.844925  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:35.844940  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:35.882135  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:35.882168  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:38.412949  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:38.423327  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:38.423399  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:38.447069  546345 cri.go:89] found id: ""
	I1202 22:32:38.447097  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.447107  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:38.447148  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:38.447205  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:38.473526  546345 cri.go:89] found id: ""
	I1202 22:32:38.473549  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.473558  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:38.473565  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:38.473626  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:38.501943  546345 cri.go:89] found id: ""
	I1202 22:32:38.501974  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.501984  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:38.501990  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:38.502049  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:38.526634  546345 cri.go:89] found id: ""
	I1202 22:32:38.526657  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.526666  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:38.526672  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:38.526730  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:38.555523  546345 cri.go:89] found id: ""
	I1202 22:32:38.555549  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.555558  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:38.555564  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:38.555622  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:38.579778  546345 cri.go:89] found id: ""
	I1202 22:32:38.579804  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.579812  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:38.579819  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:38.579875  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:38.605528  546345 cri.go:89] found id: ""
	I1202 22:32:38.605589  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.605613  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:38.605633  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:38.605733  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:38.629391  546345 cri.go:89] found id: ""
	I1202 22:32:38.629412  546345 logs.go:282] 0 containers: []
	W1202 22:32:38.629421  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:38.629429  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:38.629441  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:38.684729  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:38.684763  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:38.699841  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:38.699916  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:38.767359  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:38.760357    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.761047    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.762602    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.762883    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.764331    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:38.760357    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.761047    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.762602    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.762883    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:38.764331    9586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:38.767378  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:38.767391  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:38.792073  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:38.792104  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
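	With no containers to inspect, the gather step falls back to host-side sources: the kubelet and containerd journals plus a severity-filtered dmesg, exactly the commands shown above. A stand-alone version of that step follows; the /tmp output paths are illustrative, not from the report:

	    #!/bin/bash
	    # Hedged sketch bundling the host-side log collection seen in these cycles.
	    # Commands are verbatim from the log; the output files are made up.
	    sudo journalctl -u kubelet -n 400 > /tmp/kubelet.log
	    sudo journalctl -u containerd -n 400 > /tmp/containerd.log
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > /tmp/dmesg.log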
	I1202 22:32:41.385000  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:41.395673  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:41.395741  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:41.420534  546345 cri.go:89] found id: ""
	I1202 22:32:41.420574  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.420586  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:41.420593  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:41.420652  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:41.445534  546345 cri.go:89] found id: ""
	I1202 22:32:41.445559  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.445567  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:41.445573  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:41.445635  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:41.470438  546345 cri.go:89] found id: ""
	I1202 22:32:41.470463  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.470473  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:41.470481  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:41.470551  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:41.495013  546345 cri.go:89] found id: ""
	I1202 22:32:41.495037  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.495045  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:41.495052  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:41.495139  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:41.520340  546345 cri.go:89] found id: ""
	I1202 22:32:41.520375  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.520385  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:41.520392  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:41.520488  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:41.545599  546345 cri.go:89] found id: ""
	I1202 22:32:41.545633  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.545642  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:41.545649  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:41.545753  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:41.570203  546345 cri.go:89] found id: ""
	I1202 22:32:41.570227  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.570235  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:41.570241  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:41.570317  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:41.595416  546345 cri.go:89] found id: ""
	I1202 22:32:41.595442  546345 logs.go:282] 0 containers: []
	W1202 22:32:41.595451  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:41.595461  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:41.595493  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:41.622428  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:41.622456  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:41.678602  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:41.678634  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:41.694624  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:41.694654  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:41.757051  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:41.749001    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.749418    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.751146    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.751874    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.753439    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:41.749001    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.749418    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.751146    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.751874    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:41.753439    9712 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:41.757072  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:41.757085  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:44.281854  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:44.292430  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:44.292510  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:44.317241  546345 cri.go:89] found id: ""
	I1202 22:32:44.317271  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.317279  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:44.317286  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:44.317350  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:44.341824  546345 cri.go:89] found id: ""
	I1202 22:32:44.341849  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.341857  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:44.341865  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:44.341926  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:44.366036  546345 cri.go:89] found id: ""
	I1202 22:32:44.366061  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.366070  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:44.366077  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:44.366139  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:44.391175  546345 cri.go:89] found id: ""
	I1202 22:32:44.391200  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.391209  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:44.391216  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:44.391292  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:44.420090  546345 cri.go:89] found id: ""
	I1202 22:32:44.420123  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.420132  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:44.420155  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:44.420234  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:44.444490  546345 cri.go:89] found id: ""
	I1202 22:32:44.444540  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.444549  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:44.444557  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:44.444612  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:44.470392  546345 cri.go:89] found id: ""
	I1202 22:32:44.470419  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.470427  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:44.470434  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:44.470493  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:44.495601  546345 cri.go:89] found id: ""
	I1202 22:32:44.495624  546345 logs.go:282] 0 containers: []
	W1202 22:32:44.495633  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:44.495664  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:44.495690  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:44.549795  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:44.549886  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:44.567082  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:44.567110  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:44.632540  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:44.624658    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.625347    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.626939    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.627534    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.629113    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:44.624658    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.625347    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.626939    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.627534    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:44.629113    9814 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:44.632570  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:44.632582  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:44.657144  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:44.657180  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
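	The "container status" step is runtime-agnostic by construction: if `which crictl` finds nothing, the backticks still substitute the literal word crictl, that command fails, and `|| sudo docker ps -a` covers docker-runtime clusters. The same fallback in isolation:

	    # Hedged sketch of the fallback chain used above: prefer crictl,
	    # fall back to docker when crictl is missing or exits nonzero.
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a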
	I1202 22:32:47.185793  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:47.196271  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:47.196339  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:47.226550  546345 cri.go:89] found id: ""
	I1202 22:32:47.226572  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.226581  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:47.226588  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:47.226645  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:47.250706  546345 cri.go:89] found id: ""
	I1202 22:32:47.250732  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.250741  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:47.250748  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:47.250811  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:47.280047  546345 cri.go:89] found id: ""
	I1202 22:32:47.280072  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.280081  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:47.280088  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:47.280154  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:47.306607  546345 cri.go:89] found id: ""
	I1202 22:32:47.306633  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.306642  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:47.306651  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:47.306718  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:47.330953  546345 cri.go:89] found id: ""
	I1202 22:32:47.331024  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.331038  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:47.331045  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:47.331105  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:47.360182  546345 cri.go:89] found id: ""
	I1202 22:32:47.360206  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.360215  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:47.360222  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:47.360293  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:47.388010  546345 cri.go:89] found id: ""
	I1202 22:32:47.388032  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.388041  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:47.388048  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:47.388114  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:47.415262  546345 cri.go:89] found id: ""
	I1202 22:32:47.415294  546345 logs.go:282] 0 containers: []
	W1202 22:32:47.415303  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:47.415312  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:47.415326  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:47.433260  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:47.433288  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:47.497337  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:47.489370    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.490186    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.491743    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.492249    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.493701    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:47.489370    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.490186    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.491743    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.492249    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:47.493701    9931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:47.497366  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:47.497378  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:47.521722  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:47.521801  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:47.548995  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:47.549027  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:50.107291  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:50.119155  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:50.119230  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:50.144228  546345 cri.go:89] found id: ""
	I1202 22:32:50.144252  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.144261  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:50.144268  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:50.144329  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:50.172928  546345 cri.go:89] found id: ""
	I1202 22:32:50.172951  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.172959  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:50.172966  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:50.173027  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:50.201752  546345 cri.go:89] found id: ""
	I1202 22:32:50.201795  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.201804  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:50.201811  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:50.201873  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:50.225118  546345 cri.go:89] found id: ""
	I1202 22:32:50.225139  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.225148  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:50.225154  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:50.225217  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:50.251396  546345 cri.go:89] found id: ""
	I1202 22:32:50.251421  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.251430  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:50.251437  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:50.251495  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:50.278859  546345 cri.go:89] found id: ""
	I1202 22:32:50.278887  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.278896  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:50.278903  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:50.278961  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:50.302858  546345 cri.go:89] found id: ""
	I1202 22:32:50.302891  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.302900  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:50.302907  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:50.302972  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:50.330618  546345 cri.go:89] found id: ""
	I1202 22:32:50.330642  546345 logs.go:282] 0 containers: []
	W1202 22:32:50.330650  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:50.330659  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:50.330670  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:50.347121  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:50.347147  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:50.414460  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:50.406836   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.407605   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.409232   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.409526   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.410953   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:50.406836   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.407605   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.409232   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.409526   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:50.410953   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:50.414482  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:50.414496  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:50.438651  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:50.438682  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:50.466506  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:50.466532  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
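	Between gathering rounds the loop re-probes for a live apiserver with pgrep: -f matches against the full command line, -x requires the pattern to cover that whole line, and -n keeps only the newest match. A quick way to see the exit status the loop keeps getting:

	    # Hedged sketch: the same liveness probe with its exit code made visible.
	    # Status 1 with no output is the "not found" result every cycle hits here.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'; echo "pgrep exit: $?"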
	I1202 22:32:53.022126  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:53.032606  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:53.032678  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:53.068046  546345 cri.go:89] found id: ""
	I1202 22:32:53.068078  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.068088  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:53.068095  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:53.068154  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:53.130393  546345 cri.go:89] found id: ""
	I1202 22:32:53.130414  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.130423  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:53.130429  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:53.130488  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:53.156458  546345 cri.go:89] found id: ""
	I1202 22:32:53.156481  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.156498  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:53.156504  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:53.156564  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:53.180994  546345 cri.go:89] found id: ""
	I1202 22:32:53.181067  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.181090  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:53.181110  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:53.181196  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:53.204951  546345 cri.go:89] found id: ""
	I1202 22:32:53.204976  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.204985  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:53.204993  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:53.205053  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:53.232863  546345 cri.go:89] found id: ""
	I1202 22:32:53.232896  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.232905  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:53.232912  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:53.232981  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:53.263355  546345 cri.go:89] found id: ""
	I1202 22:32:53.263381  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.263390  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:53.263396  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:53.263454  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:53.288048  546345 cri.go:89] found id: ""
	I1202 22:32:53.288074  546345 logs.go:282] 0 containers: []
	W1202 22:32:53.288082  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:53.288092  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:53.288103  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:53.343380  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:53.343416  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:53.359279  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:53.359304  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:53.426667  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:53.418963   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.419594   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.421185   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.421729   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.423366   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:53.418963   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.419594   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.421185   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.421729   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:53.423366   10161 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
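
Each "describe nodes" attempt fails the same way: kubectl on the node dials the apiserver at localhost:8443 and gets "connection refused", which means nothing is listening on that port at all; that is consistent with crictl finding no kube-apiserver container above. A self-contained sketch of the underlying reachability check (assuming it runs on the node itself, where this profile's apiserver would listen on 8443):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A "connection refused" from this dial means no process is bound to
	// 8443, i.e. the apiserver never came up; a timeout would instead
	// suggest a firewall or routing problem.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
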
	I1202 22:32:53.426690  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:53.426703  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:53.451602  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:53.451640  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
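
The "container status" command uses a shell fallback chain: "which crictl || echo crictl" keeps the command word even when crictl is missing from PATH (so the failure mode is a clean "command not found"), and the trailing "|| sudo docker ps -a" retries with Docker when the crictl invocation fails. A hypothetical Go rendering of the same prefer-crictl-then-docker logic, for illustration only:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl; mirror the `|| sudo docker ps -a` fallback from the
	// logged shell command when crictl is absent or exits non-zero.
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	}
	if err != nil {
		fmt.Println("neither crictl nor docker could list containers:", err)
		return
	}
	fmt.Print(string(out))
}
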
	I1202 22:32:55.979195  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:55.989644  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:55.989738  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:56.016824  546345 cri.go:89] found id: ""
	I1202 22:32:56.016857  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.016866  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:56.016873  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:56.016939  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:56.061785  546345 cri.go:89] found id: ""
	I1202 22:32:56.061833  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.061846  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:56.061854  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:56.061938  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:56.099312  546345 cri.go:89] found id: ""
	I1202 22:32:56.099341  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.099351  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:56.099359  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:56.099422  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:56.133179  546345 cri.go:89] found id: ""
	I1202 22:32:56.133209  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.133217  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:56.133224  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:56.133285  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:56.166397  546345 cri.go:89] found id: ""
	I1202 22:32:56.166420  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.166429  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:56.166435  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:56.166493  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:56.191240  546345 cri.go:89] found id: ""
	I1202 22:32:56.191300  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.191323  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:56.191343  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:56.191406  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:56.219940  546345 cri.go:89] found id: ""
	I1202 22:32:56.219966  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.219975  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:56.219982  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:56.220042  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:56.245089  546345 cri.go:89] found id: ""
	I1202 22:32:56.245116  546345 logs.go:282] 0 containers: []
	W1202 22:32:56.245125  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:56.245134  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:56.245145  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:32:56.275969  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:56.275995  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:56.330353  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:56.330388  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:56.346262  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:56.346293  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:56.411285  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:56.403890   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.404747   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.406252   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.406670   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.408188   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:56.403890   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.404747   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.406252   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.406670   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:56.408188   10283 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:56.411307  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:56.411320  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:58.937516  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:32:58.947690  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:32:58.947760  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:32:58.971175  546345 cri.go:89] found id: ""
	I1202 22:32:58.971209  546345 logs.go:282] 0 containers: []
	W1202 22:32:58.971221  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:32:58.971229  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:32:58.971289  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:32:58.995437  546345 cri.go:89] found id: ""
	I1202 22:32:58.995465  546345 logs.go:282] 0 containers: []
	W1202 22:32:58.995474  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:32:58.995481  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:32:58.995538  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:32:59.021289  546345 cri.go:89] found id: ""
	I1202 22:32:59.021315  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.021323  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:32:59.021329  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:32:59.021388  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:32:59.067648  546345 cri.go:89] found id: ""
	I1202 22:32:59.067676  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.067684  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:32:59.067691  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:32:59.067752  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:32:59.120318  546345 cri.go:89] found id: ""
	I1202 22:32:59.120353  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.120362  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:32:59.120369  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:32:59.120435  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:32:59.147811  546345 cri.go:89] found id: ""
	I1202 22:32:59.147845  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.147855  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:32:59.147862  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:32:59.147929  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:32:59.176414  546345 cri.go:89] found id: ""
	I1202 22:32:59.176448  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.176456  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:32:59.176463  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:32:59.176534  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:32:59.202001  546345 cri.go:89] found id: ""
	I1202 22:32:59.202027  546345 logs.go:282] 0 containers: []
	W1202 22:32:59.202035  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:32:59.202045  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:32:59.202056  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:32:59.257545  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:32:59.257581  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:32:59.273305  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:32:59.273385  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:32:59.335480  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:32:59.328097   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.328843   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.330336   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.330876   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.332477   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:32:59.328097   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.328843   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.330336   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.330876   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:32:59.332477   10386 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:32:59.335501  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:32:59.335514  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:32:59.359981  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:32:59.360017  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:01.886549  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:01.897148  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:01.897222  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:01.923194  546345 cri.go:89] found id: ""
	I1202 22:33:01.923220  546345 logs.go:282] 0 containers: []
	W1202 22:33:01.923229  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:01.923236  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:01.923295  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:01.947898  546345 cri.go:89] found id: ""
	I1202 22:33:01.947922  546345 logs.go:282] 0 containers: []
	W1202 22:33:01.947930  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:01.947937  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:01.947996  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:01.977128  546345 cri.go:89] found id: ""
	I1202 22:33:01.977153  546345 logs.go:282] 0 containers: []
	W1202 22:33:01.977161  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:01.977167  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:01.977226  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:02.004541  546345 cri.go:89] found id: ""
	I1202 22:33:02.004569  546345 logs.go:282] 0 containers: []
	W1202 22:33:02.004578  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:02.004586  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:02.004660  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:02.034163  546345 cri.go:89] found id: ""
	I1202 22:33:02.034189  546345 logs.go:282] 0 containers: []
	W1202 22:33:02.034199  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:02.034206  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:02.034302  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:02.103583  546345 cri.go:89] found id: ""
	I1202 22:33:02.103619  546345 logs.go:282] 0 containers: []
	W1202 22:33:02.103628  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:02.103651  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:02.103732  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:02.141546  546345 cri.go:89] found id: ""
	I1202 22:33:02.141581  546345 logs.go:282] 0 containers: []
	W1202 22:33:02.141590  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:02.141597  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:02.141672  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:02.166780  546345 cri.go:89] found id: ""
	I1202 22:33:02.166805  546345 logs.go:282] 0 containers: []
	W1202 22:33:02.166815  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:02.166824  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:02.166835  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:02.191150  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:02.191186  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:02.222079  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:02.222108  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:02.279420  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:02.279453  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:02.295466  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:02.295494  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:02.371035  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:02.361373   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.362484   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.364780   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.365388   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.366360   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:02.361373   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.362484   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.364780   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.365388   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:02.366360   10513 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:04.872723  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:04.882988  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:04.883064  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:04.906907  546345 cri.go:89] found id: ""
	I1202 22:33:04.906931  546345 logs.go:282] 0 containers: []
	W1202 22:33:04.906940  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:04.906947  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:04.907006  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:04.931077  546345 cri.go:89] found id: ""
	I1202 22:33:04.931102  546345 logs.go:282] 0 containers: []
	W1202 22:33:04.931111  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:04.931119  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:04.931176  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:04.954232  546345 cri.go:89] found id: ""
	I1202 22:33:04.954258  546345 logs.go:282] 0 containers: []
	W1202 22:33:04.954266  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:04.954273  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:04.954332  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:04.978316  546345 cri.go:89] found id: ""
	I1202 22:33:04.978339  546345 logs.go:282] 0 containers: []
	W1202 22:33:04.978347  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:04.978354  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:04.978412  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:05.008227  546345 cri.go:89] found id: ""
	I1202 22:33:05.008253  546345 logs.go:282] 0 containers: []
	W1202 22:33:05.008261  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:05.008269  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:05.008401  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:05.037911  546345 cri.go:89] found id: ""
	I1202 22:33:05.037948  546345 logs.go:282] 0 containers: []
	W1202 22:33:05.037957  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:05.037964  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:05.038041  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:05.115835  546345 cri.go:89] found id: ""
	I1202 22:33:05.115860  546345 logs.go:282] 0 containers: []
	W1202 22:33:05.115869  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:05.115876  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:05.115944  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:05.142576  546345 cri.go:89] found id: ""
	I1202 22:33:05.142599  546345 logs.go:282] 0 containers: []
	W1202 22:33:05.142608  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:05.142617  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:05.142628  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:05.172774  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:05.172802  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:05.229451  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:05.229486  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:05.245158  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:05.245184  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:05.308964  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:05.301260   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.302075   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.303718   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.304189   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.305899   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:05.301260   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.302075   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.303718   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.304189   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:05.305899   10624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:05.308985  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:05.309000  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:07.834473  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:07.845693  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:07.845780  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:07.870140  546345 cri.go:89] found id: ""
	I1202 22:33:07.870162  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.870171  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:07.870178  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:07.870238  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:07.894539  546345 cri.go:89] found id: ""
	I1202 22:33:07.894562  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.894570  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:07.894583  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:07.894640  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:07.918644  546345 cri.go:89] found id: ""
	I1202 22:33:07.918672  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.918681  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:07.918688  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:07.918751  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:07.942273  546345 cri.go:89] found id: ""
	I1202 22:33:07.942296  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.942304  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:07.942310  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:07.942367  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:07.965678  546345 cri.go:89] found id: ""
	I1202 22:33:07.965703  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.965712  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:07.965718  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:07.965775  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:07.989455  546345 cri.go:89] found id: ""
	I1202 22:33:07.989480  546345 logs.go:282] 0 containers: []
	W1202 22:33:07.989489  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:07.989496  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:07.989556  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:08.015583  546345 cri.go:89] found id: ""
	I1202 22:33:08.015608  546345 logs.go:282] 0 containers: []
	W1202 22:33:08.015617  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:08.015624  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:08.015686  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:08.068697  546345 cri.go:89] found id: ""
	I1202 22:33:08.068724  546345 logs.go:282] 0 containers: []
	W1202 22:33:08.068734  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:08.068745  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:08.068768  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:08.112700  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:08.112750  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:08.148124  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:08.148159  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:08.208343  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:08.208384  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:08.224299  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:08.224331  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:08.287847  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:08.279728   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.280541   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.282177   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.282779   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.284345   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:08.279728   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.280541   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.282177   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.282779   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:08.284345   10741 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:10.788102  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:10.798373  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:10.798493  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:10.826690  546345 cri.go:89] found id: ""
	I1202 22:33:10.826715  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.826724  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:10.826731  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:10.826791  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:10.857739  546345 cri.go:89] found id: ""
	I1202 22:33:10.857765  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.857773  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:10.857780  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:10.857841  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:10.886900  546345 cri.go:89] found id: ""
	I1202 22:33:10.886926  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.886935  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:10.886942  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:10.887001  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:10.915788  546345 cri.go:89] found id: ""
	I1202 22:33:10.915811  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.915820  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:10.915826  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:10.915883  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:10.940846  546345 cri.go:89] found id: ""
	I1202 22:33:10.940869  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.940877  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:10.940883  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:10.940942  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:10.969358  546345 cri.go:89] found id: ""
	I1202 22:33:10.969380  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.969389  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:10.969396  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:10.969452  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:10.994365  546345 cri.go:89] found id: ""
	I1202 22:33:10.994389  546345 logs.go:282] 0 containers: []
	W1202 22:33:10.994398  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:10.994405  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:10.994488  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:11.021354  546345 cri.go:89] found id: ""
	I1202 22:33:11.021376  546345 logs.go:282] 0 containers: []
	W1202 22:33:11.021387  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:11.021396  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:11.021406  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:11.096880  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:11.096922  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:11.115249  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:11.115286  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:11.192270  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:11.184836   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.185321   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.186779   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.187091   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.188512   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:11.184836   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.185321   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.186779   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.187091   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:11.188512   10842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:11.192290  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:11.192305  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:11.216801  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:11.216838  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:13.747802  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:13.758663  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:13.758739  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:13.784138  546345 cri.go:89] found id: ""
	I1202 22:33:13.784160  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.784169  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:13.784175  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:13.784242  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:13.810746  546345 cri.go:89] found id: ""
	I1202 22:33:13.810768  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.810777  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:13.810783  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:13.810841  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:13.834531  546345 cri.go:89] found id: ""
	I1202 22:33:13.834563  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.834571  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:13.834578  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:13.834644  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:13.858698  546345 cri.go:89] found id: ""
	I1202 22:33:13.858721  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.858729  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:13.858736  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:13.858798  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:13.882726  546345 cri.go:89] found id: ""
	I1202 22:33:13.882749  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.882757  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:13.882764  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:13.882822  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:13.908263  546345 cri.go:89] found id: ""
	I1202 22:33:13.908287  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.908296  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:13.908302  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:13.908359  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:13.933266  546345 cri.go:89] found id: ""
	I1202 22:33:13.933290  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.933298  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:13.933304  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:13.933361  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:13.957668  546345 cri.go:89] found id: ""
	I1202 22:33:13.957738  546345 logs.go:282] 0 containers: []
	W1202 22:33:13.957753  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:13.957764  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:13.957776  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:13.983158  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:13.983193  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:14.013404  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:14.013434  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:14.076941  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:14.076982  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:14.122673  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:14.122701  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:14.186208  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:14.178781   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.179568   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.180719   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.181367   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.183063   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:14.178781   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.179568   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.180719   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.181367   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:14.183063   10970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:16.686471  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:16.697167  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:16.697255  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:16.721334  546345 cri.go:89] found id: ""
	I1202 22:33:16.721358  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.721367  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:16.721374  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:16.721439  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:16.744849  546345 cri.go:89] found id: ""
	I1202 22:33:16.744875  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.744887  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:16.744893  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:16.744950  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:16.768289  546345 cri.go:89] found id: ""
	I1202 22:33:16.768315  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.768324  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:16.768330  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:16.768390  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:16.793721  546345 cri.go:89] found id: ""
	I1202 22:33:16.793745  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.793754  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:16.793761  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:16.793822  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:16.819397  546345 cri.go:89] found id: ""
	I1202 22:33:16.819419  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.819427  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:16.819434  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:16.819493  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:16.847655  546345 cri.go:89] found id: ""
	I1202 22:33:16.847682  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.847691  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:16.847699  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:16.847779  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:16.872502  546345 cri.go:89] found id: ""
	I1202 22:33:16.872527  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.872535  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:16.872542  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:16.872605  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:16.904922  546345 cri.go:89] found id: ""
	I1202 22:33:16.904953  546345 logs.go:282] 0 containers: []
	W1202 22:33:16.904968  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:16.904978  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:16.904990  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:16.929494  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:16.929529  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:16.960812  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:16.960840  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:17.015332  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:17.015369  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:17.031163  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:17.031192  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:17.146404  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:17.138582   11076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:17.139239   11076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:17.140866   11076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:17.141549   11076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:17.143289   11076 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:19.646668  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:19.656904  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:19.656972  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:19.681366  546345 cri.go:89] found id: ""
	I1202 22:33:19.681390  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.681397  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:19.681404  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:19.681462  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:19.705682  546345 cri.go:89] found id: ""
	I1202 22:33:19.705711  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.705720  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:19.705726  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:19.705782  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:19.728889  546345 cri.go:89] found id: ""
	I1202 22:33:19.728913  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.728921  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:19.728928  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:19.728986  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:19.753177  546345 cri.go:89] found id: ""
	I1202 22:33:19.753200  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.753209  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:19.753215  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:19.753275  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:19.777064  546345 cri.go:89] found id: ""
	I1202 22:33:19.777087  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.777095  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:19.777101  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:19.777165  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:19.804440  546345 cri.go:89] found id: ""
	I1202 22:33:19.804462  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.804479  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:19.804487  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:19.804544  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:19.831370  546345 cri.go:89] found id: ""
	I1202 22:33:19.831395  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.831403  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:19.831409  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:19.831470  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:19.854457  546345 cri.go:89] found id: ""
	I1202 22:33:19.854481  546345 logs.go:282] 0 containers: []
	W1202 22:33:19.854489  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:19.854498  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:19.854512  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:19.912020  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:19.912055  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:19.927521  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:19.927549  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:19.988124  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:19.980291   11176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:19.980690   11176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:19.981977   11176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:19.982920   11176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:19.984635   11176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:19.988188  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:19.988211  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:20.013304  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:20.013341  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
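From here the cycle repeats essentially verbatim every few seconds: probe for an apiserver process, list CRI containers for each control-plane component, then gather kubelet, dmesg, describe nodes, containerd, and container-status logs in varying order. A sketch of the equivalent wait loop (the three-second interval is inferred from the timestamps, not taken from minikube's source):

    # Keep re-probing until a kube-apiserver process matching minikube's pattern shows up
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sudo crictl ps -a --quiet --name=kube-apiserver    # stays empty for this whole log
        sleep 3                                            # interval inferred from timestamps
    done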
	I1202 22:33:22.562705  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:22.573519  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:22.573597  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:22.603465  546345 cri.go:89] found id: ""
	I1202 22:33:22.603541  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.603556  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:22.603564  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:22.603670  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:22.629949  546345 cri.go:89] found id: ""
	I1202 22:33:22.629976  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.629985  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:22.629991  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:22.630051  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:22.660760  546345 cri.go:89] found id: ""
	I1202 22:33:22.660785  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.660794  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:22.660801  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:22.660861  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:22.685501  546345 cri.go:89] found id: ""
	I1202 22:33:22.685531  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.685540  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:22.685555  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:22.685618  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:22.712679  546345 cri.go:89] found id: ""
	I1202 22:33:22.712714  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.712723  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:22.712730  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:22.712799  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:22.738275  546345 cri.go:89] found id: ""
	I1202 22:33:22.738301  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.738310  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:22.738317  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:22.738437  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:22.767652  546345 cri.go:89] found id: ""
	I1202 22:33:22.767677  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.767686  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:22.767694  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:22.767756  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:22.793810  546345 cri.go:89] found id: ""
	I1202 22:33:22.793836  546345 logs.go:282] 0 containers: []
	W1202 22:33:22.793845  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:22.793854  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:22.793866  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:22.856577  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:22.856615  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:22.872185  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:22.872221  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:22.937005  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:22.929061   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:22.930043   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:22.931595   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:22.932111   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:22.933624   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:22.937039  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:22.937052  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:22.961706  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:22.961743  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:25.491815  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:25.502275  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:25.502392  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:25.526647  546345 cri.go:89] found id: ""
	I1202 22:33:25.526680  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.526688  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:25.526695  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:25.526767  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:25.554949  546345 cri.go:89] found id: ""
	I1202 22:33:25.554970  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.554980  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:25.554986  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:25.555043  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:25.578929  546345 cri.go:89] found id: ""
	I1202 22:33:25.578953  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.578962  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:25.578968  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:25.579044  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:25.608022  546345 cri.go:89] found id: ""
	I1202 22:33:25.608056  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.608065  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:25.608088  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:25.608169  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:25.636085  546345 cri.go:89] found id: ""
	I1202 22:33:25.636120  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.636130  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:25.636153  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:25.636235  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:25.666823  546345 cri.go:89] found id: ""
	I1202 22:33:25.666856  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.666865  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:25.666873  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:25.666942  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:25.690601  546345 cri.go:89] found id: ""
	I1202 22:33:25.690635  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.690645  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:25.690652  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:25.690723  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:25.719343  546345 cri.go:89] found id: ""
	I1202 22:33:25.719379  546345 logs.go:282] 0 containers: []
	W1202 22:33:25.719388  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:25.719396  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:25.719408  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:25.743724  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:25.743768  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:25.771761  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:25.771786  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:25.828678  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:25.828713  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:25.844300  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:25.844332  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:25.908308  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:25.900092   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:25.900613   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:25.902283   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:25.902859   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:25.904505   11417 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
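Note that the "describe nodes" probe never touches a host kubectl: it runs the Kubernetes-version-matched binary that minikube stages inside the node, against the cluster's own kubeconfig (both paths verbatim from the runs above):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig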
	I1202 22:33:28.409045  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:28.420392  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:28.420486  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:28.451663  546345 cri.go:89] found id: ""
	I1202 22:33:28.451687  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.451696  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:28.451704  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:28.451770  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:28.480763  546345 cri.go:89] found id: ""
	I1202 22:33:28.480788  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.480797  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:28.480804  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:28.480888  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:28.505757  546345 cri.go:89] found id: ""
	I1202 22:33:28.505781  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.505789  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:28.505796  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:28.505882  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:28.530092  546345 cri.go:89] found id: ""
	I1202 22:33:28.530124  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.530134  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:28.530141  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:28.530202  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:28.555441  546345 cri.go:89] found id: ""
	I1202 22:33:28.555468  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.555477  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:28.555484  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:28.555542  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:28.588393  546345 cri.go:89] found id: ""
	I1202 22:33:28.588414  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.588422  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:28.588429  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:28.588498  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:28.615564  546345 cri.go:89] found id: ""
	I1202 22:33:28.615586  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.615595  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:28.615602  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:28.615663  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:28.640294  546345 cri.go:89] found id: ""
	I1202 22:33:28.640316  546345 logs.go:282] 0 containers: []
	W1202 22:33:28.640324  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:28.640333  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:28.640344  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:28.670446  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:28.670473  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:28.731540  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:28.731583  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:28.747338  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:28.747365  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:28.807964  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:28.800513   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:28.801318   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:28.802857   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:28.803139   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:28.804600   11527 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:28.807987  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:28.808001  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:31.332523  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:31.349889  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:31.349961  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:31.381168  546345 cri.go:89] found id: ""
	I1202 22:33:31.381196  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.381204  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:31.381211  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:31.381274  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:31.408915  546345 cri.go:89] found id: ""
	I1202 22:33:31.408947  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.408956  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:31.408963  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:31.409025  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:31.433408  546345 cri.go:89] found id: ""
	I1202 22:33:31.433433  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.433441  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:31.433448  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:31.433506  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:31.457935  546345 cri.go:89] found id: ""
	I1202 22:33:31.457968  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.457976  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:31.457983  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:31.458053  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:31.481621  546345 cri.go:89] found id: ""
	I1202 22:33:31.481694  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.481704  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:31.481711  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:31.481781  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:31.505764  546345 cri.go:89] found id: ""
	I1202 22:33:31.505789  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.505799  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:31.505805  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:31.505864  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:31.530522  546345 cri.go:89] found id: ""
	I1202 22:33:31.530557  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.530565  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:31.530572  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:31.530639  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:31.558641  546345 cri.go:89] found id: ""
	I1202 22:33:31.558706  546345 logs.go:282] 0 containers: []
	W1202 22:33:31.558720  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:31.558731  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:31.558747  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:31.614675  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:31.614707  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:31.630252  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:31.630279  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:31.695335  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:31.687643   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:31.688201   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:31.689779   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:31.690376   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:31.692067   11631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:31.695359  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:31.695372  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:31.719979  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:31.720013  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:34.252356  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:34.264856  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:34.264924  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:34.303387  546345 cri.go:89] found id: ""
	I1202 22:33:34.303422  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.303437  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:34.303445  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:34.303502  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:34.377615  546345 cri.go:89] found id: ""
	I1202 22:33:34.377643  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.377665  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:34.377673  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:34.377750  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:34.409336  546345 cri.go:89] found id: ""
	I1202 22:33:34.409359  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.409367  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:34.409374  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:34.409433  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:34.434153  546345 cri.go:89] found id: ""
	I1202 22:33:34.434175  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.434184  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:34.434190  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:34.434250  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:34.459524  546345 cri.go:89] found id: ""
	I1202 22:33:34.459549  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.459558  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:34.459565  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:34.459622  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:34.487835  546345 cri.go:89] found id: ""
	I1202 22:33:34.487862  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.487871  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:34.487878  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:34.487939  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:34.511616  546345 cri.go:89] found id: ""
	I1202 22:33:34.511638  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.511647  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:34.511654  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:34.511712  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:34.539284  546345 cri.go:89] found id: ""
	I1202 22:33:34.539307  546345 logs.go:282] 0 containers: []
	W1202 22:33:34.539315  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:34.539324  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:34.539335  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:34.594370  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:34.594404  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:34.610176  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:34.610203  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:34.674945  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:34.667881   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:34.668374   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:34.669938   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:34.670382   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:34.671879   11743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1202 22:33:34.674968  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:34.674980  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:34.699820  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:34.699855  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:37.235245  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:37.245512  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:37.245580  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:37.270720  546345 cri.go:89] found id: ""
	I1202 22:33:37.270743  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.270751  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:37.270757  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:37.270818  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:37.317208  546345 cri.go:89] found id: ""
	I1202 22:33:37.317236  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.317244  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:37.317250  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:37.317357  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:37.381241  546345 cri.go:89] found id: ""
	I1202 22:33:37.381304  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.381319  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:37.381331  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:37.381391  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:37.406579  546345 cri.go:89] found id: ""
	I1202 22:33:37.406604  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.406613  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:37.406620  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:37.406676  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:37.431035  546345 cri.go:89] found id: ""
	I1202 22:33:37.431061  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.431071  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:37.431078  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:37.431170  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:37.455450  546345 cri.go:89] found id: ""
	I1202 22:33:37.455476  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.455485  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:37.455491  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:37.455549  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:37.479696  546345 cri.go:89] found id: ""
	I1202 22:33:37.479763  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.479784  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:37.479791  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:37.479864  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:37.504424  546345 cri.go:89] found id: ""
	I1202 22:33:37.504449  546345 logs.go:282] 0 containers: []
	W1202 22:33:37.504465  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:37.504475  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:37.504486  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:37.562929  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:37.562965  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:37.578720  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:37.578749  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:37.643738  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:37.635957   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.636680   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.638363   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.638894   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.640533   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:37.635957   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.636680   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.638363   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.638894   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:37.640533   11856 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:37.643758  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:37.643770  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:37.669355  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:37.669389  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
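Each cycle above checks the CRI for one control-plane component at a time via crictl. A standalone sketch of the same scan, assuming crictl is installed and containerd is the runtime (component names taken from the log):

    #!/usr/bin/env bash
    # List container IDs for each expected component, as the cycles above do.
    set -u
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done

An empty result for every name, as seen here, means the runtime has no control-plane containers at all, not merely unhealthy ones.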
	I1202 22:33:40.197629  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:40.209725  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:40.209798  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:40.235226  546345 cri.go:89] found id: ""
	I1202 22:33:40.235249  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.235258  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:40.235265  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:40.235323  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:40.264913  546345 cri.go:89] found id: ""
	I1202 22:33:40.264938  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.264948  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:40.264955  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:40.265014  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:40.292266  546345 cri.go:89] found id: ""
	I1202 22:33:40.292293  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.292302  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:40.292309  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:40.292366  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:40.328677  546345 cri.go:89] found id: ""
	I1202 22:33:40.328703  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.328712  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:40.328718  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:40.328779  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:40.372520  546345 cri.go:89] found id: ""
	I1202 22:33:40.372553  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.372562  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:40.372570  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:40.372637  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:40.401860  546345 cri.go:89] found id: ""
	I1202 22:33:40.401896  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.401906  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:40.401913  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:40.401981  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:40.426706  546345 cri.go:89] found id: ""
	I1202 22:33:40.426774  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.426790  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:40.426797  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:40.426871  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:40.450845  546345 cri.go:89] found id: ""
	I1202 22:33:40.450873  546345 logs.go:282] 0 containers: []
	W1202 22:33:40.450882  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:40.450892  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:40.450921  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:40.466330  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:40.466359  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:40.530421  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:40.522152   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.522737   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.524454   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.524953   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.526601   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:40.522152   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.522737   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.524454   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.524953   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:40.526601   11966 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:40.530440  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:40.530471  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:40.557935  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:40.557971  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:40.589359  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:40.589413  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
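Every "describe nodes" attempt fails the same way: kubectl dials https://localhost:8443 and gets connection refused, so nothing is bound to that port. Two quick probes to confirm the port state from inside the node (illustrative commands, not part of the test suite):

    # Is anything listening on the apiserver port?
    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"

    # Probe the endpoint directly; -k skips TLS verification.
    curl -k --max-time 5 https://localhost:8443/healthz || echo "apiserver unreachable"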
	I1202 22:33:43.149757  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:43.160459  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:43.160531  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:43.185860  546345 cri.go:89] found id: ""
	I1202 22:33:43.185885  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.185893  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:43.185900  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:43.185959  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:43.213745  546345 cri.go:89] found id: ""
	I1202 22:33:43.213771  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.213782  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:43.213788  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:43.213845  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:43.238763  546345 cri.go:89] found id: ""
	I1202 22:33:43.238788  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.238796  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:43.238805  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:43.238865  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:43.263259  546345 cri.go:89] found id: ""
	I1202 22:33:43.263285  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.263294  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:43.263301  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:43.263362  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:43.287780  546345 cri.go:89] found id: ""
	I1202 22:33:43.287804  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.287812  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:43.287818  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:43.287901  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:43.333797  546345 cri.go:89] found id: ""
	I1202 22:33:43.333819  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.333827  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:43.333833  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:43.333891  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:43.379712  546345 cri.go:89] found id: ""
	I1202 22:33:43.379734  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.379743  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:43.379749  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:43.379808  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:43.415160  546345 cri.go:89] found id: ""
	I1202 22:33:43.415240  546345 logs.go:282] 0 containers: []
	W1202 22:33:43.415264  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:43.415282  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:43.415306  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:43.442448  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:43.442475  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:43.497169  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:43.497207  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:43.513334  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:43.513370  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:43.577650  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:43.569606   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.570071   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.571853   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.572346   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.574036   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:43.569606   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.570071   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.571853   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.572346   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:43.574036   12091 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:43.577691  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:43.577704  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
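The log-gathering steps are plain journalctl tails per systemd unit plus a filtered dmesg. The same collection can be run by hand, assuming kubelet and containerd are systemd units as in this node image:

    # Last 400 lines of the kubelet and containerd unit logs.
    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u containerd -n 400 --no-pager

    # Kernel messages at warn level and above, matching the filter above.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400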
	I1202 22:33:46.104276  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:46.114696  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:46.114770  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:46.143775  546345 cri.go:89] found id: ""
	I1202 22:33:46.143798  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.143806  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:46.143813  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:46.143872  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:46.168484  546345 cri.go:89] found id: ""
	I1202 22:33:46.168508  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.168517  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:46.168527  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:46.168585  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:46.195213  546345 cri.go:89] found id: ""
	I1202 22:33:46.195236  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.195244  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:46.195251  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:46.195316  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:46.218803  546345 cri.go:89] found id: ""
	I1202 22:33:46.218825  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.218833  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:46.218840  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:46.218902  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:46.242627  546345 cri.go:89] found id: ""
	I1202 22:33:46.242649  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.242657  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:46.242664  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:46.242735  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:46.268270  546345 cri.go:89] found id: ""
	I1202 22:33:46.268299  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.268314  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:46.268322  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:46.268398  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:46.303449  546345 cri.go:89] found id: ""
	I1202 22:33:46.303476  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.303484  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:46.303491  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:46.303547  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:46.355851  546345 cri.go:89] found id: ""
	I1202 22:33:46.355877  546345 logs.go:282] 0 containers: []
	W1202 22:33:46.355886  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:46.355895  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:46.355906  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:46.372396  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:46.372426  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:46.448683  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:46.440678   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.441128   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.442893   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.443519   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.445111   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:46.440678   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.441128   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.442893   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.443519   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:46.445111   12191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:46.448707  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:46.448721  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:46.472236  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:46.472269  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:46.501830  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:46.501857  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
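Each cycle begins with a process-level liveness check: pgrep for the newest process whose full command line matches the apiserver pattern. Isolated, with the pattern quoted (taken from the Run lines above):

    # -f matches against the full command line, -x requires a whole-line
    # match, and -n returns only the newest matching PID.
    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
      echo "kube-apiserver process found"
    else
      echo "kube-apiserver not running"
    fi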
	I1202 22:33:49.060676  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:49.071150  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:49.071224  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:49.095927  546345 cri.go:89] found id: ""
	I1202 22:33:49.095949  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.095963  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:49.095970  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:49.096027  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:49.121814  546345 cri.go:89] found id: ""
	I1202 22:33:49.121837  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.121846  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:49.121853  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:49.121911  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:49.150554  546345 cri.go:89] found id: ""
	I1202 22:33:49.150582  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.150590  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:49.150596  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:49.150660  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:49.174636  546345 cri.go:89] found id: ""
	I1202 22:33:49.174660  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.174668  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:49.174675  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:49.174757  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:49.198993  546345 cri.go:89] found id: ""
	I1202 22:33:49.199019  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.199028  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:49.199035  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:49.199122  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:49.237206  546345 cri.go:89] found id: ""
	I1202 22:33:49.237280  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.237304  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:49.237327  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:49.237412  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:49.262326  546345 cri.go:89] found id: ""
	I1202 22:33:49.262395  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.262418  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:49.262437  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:49.262508  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:49.287127  546345 cri.go:89] found id: ""
	I1202 22:33:49.287192  546345 logs.go:282] 0 containers: []
	W1202 22:33:49.287215  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:49.287239  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:49.287269  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:49.365279  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:49.365438  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:49.383138  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:49.383164  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:49.454034  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:49.446536   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.447192   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.448807   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.449446   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.450982   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:49.446536   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.447192   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.448807   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.449446   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:49.450982   12311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:49.454054  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:49.454066  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:49.478949  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:49.478982  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
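"describe nodes" runs with the version-pinned kubectl shipped on the node and the node-local kubeconfig, which is why it fails while the apiserver is down. The exact invocation, runnable by hand (paths as in the log):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig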
	I1202 22:33:52.007120  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:52.018354  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:52.018431  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:52.048418  546345 cri.go:89] found id: ""
	I1202 22:33:52.048502  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.048527  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:52.048554  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:52.048670  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:52.075756  546345 cri.go:89] found id: ""
	I1202 22:33:52.075795  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.075804  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:52.075811  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:52.075875  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:52.102101  546345 cri.go:89] found id: ""
	I1202 22:33:52.102128  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.102138  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:52.102145  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:52.102213  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:52.127350  546345 cri.go:89] found id: ""
	I1202 22:33:52.127375  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.127390  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:52.127397  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:52.127461  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:52.152298  546345 cri.go:89] found id: ""
	I1202 22:33:52.152325  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.152334  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:52.152340  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:52.152398  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:52.176927  546345 cri.go:89] found id: ""
	I1202 22:33:52.176952  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.176960  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:52.176966  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:52.177023  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:52.203976  546345 cri.go:89] found id: ""
	I1202 22:33:52.204003  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.204012  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:52.204018  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:52.204077  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:52.229381  546345 cri.go:89] found id: ""
	I1202 22:33:52.229408  546345 logs.go:282] 0 containers: []
	W1202 22:33:52.229416  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:52.229425  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:52.229443  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:52.292540  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:52.283085   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.283828   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.285448   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.285967   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.287627   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:52.283085   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.283828   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.285448   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.285967   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:52.287627   12413 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:52.292561  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:52.292574  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:52.324946  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:52.325102  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:52.369542  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:52.369568  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:52.436122  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:52.436159  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
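The "container status" step is runtime-agnostic: it prefers crictl when present and falls back to the docker CLI. The same pattern in isolation, rewritten with $(...) instead of backticks (behavior unchanged):

    # Prefer crictl when installed; otherwise try docker.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a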
	I1202 22:33:54.953633  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:54.963990  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:54.964062  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:54.991840  546345 cri.go:89] found id: ""
	I1202 22:33:54.991865  546345 logs.go:282] 0 containers: []
	W1202 22:33:54.991873  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:54.991880  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:54.991937  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:55.024217  546345 cri.go:89] found id: ""
	I1202 22:33:55.024241  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.024250  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:55.024258  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:55.024320  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:55.048985  546345 cri.go:89] found id: ""
	I1202 22:33:55.049007  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.049015  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:55.049021  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:55.049086  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:55.073787  546345 cri.go:89] found id: ""
	I1202 22:33:55.073809  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.073818  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:55.073825  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:55.073887  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:55.097827  546345 cri.go:89] found id: ""
	I1202 22:33:55.097849  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.097857  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:55.097864  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:55.097929  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:55.127096  546345 cri.go:89] found id: ""
	I1202 22:33:55.127119  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.127127  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:55.127135  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:55.127247  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:55.155895  546345 cri.go:89] found id: ""
	I1202 22:33:55.155920  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.155929  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:55.155936  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:55.155998  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:55.184917  546345 cri.go:89] found id: ""
	I1202 22:33:55.184943  546345 logs.go:282] 0 containers: []
	W1202 22:33:55.184951  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:55.184960  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:55.184973  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:55.245409  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:55.238197   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.238600   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.240244   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.240779   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.242395   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:55.238197   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.238600   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.240244   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.240779   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:55.242395   12525 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:55.245430  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:55.245443  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:55.269272  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:55.269303  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:33:55.324186  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:55.324256  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:55.407948  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:55.408021  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
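Taken together, the cycles form a timed wait loop: probe for the apiserver process, gather diagnostics, sleep roughly three seconds, retry. A minimal bash sketch of that shape; the timeout and interval here are illustrative, not minikube's actual values:

    # Hypothetical wait loop approximating the polling cadence in this log.
    deadline=$((SECONDS + 300))   # illustrative 5-minute budget
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3
    done
    echo "kube-apiserver is up"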
	I1202 22:33:57.927547  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:33:57.938134  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:33:57.938208  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:33:57.966983  546345 cri.go:89] found id: ""
	I1202 22:33:57.967016  546345 logs.go:282] 0 containers: []
	W1202 22:33:57.967025  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:33:57.967031  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:33:57.967090  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:33:57.990911  546345 cri.go:89] found id: ""
	I1202 22:33:57.990934  546345 logs.go:282] 0 containers: []
	W1202 22:33:57.990942  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:33:57.990949  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:33:57.991006  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:33:58.027051  546345 cri.go:89] found id: ""
	I1202 22:33:58.027076  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.027085  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:33:58.027091  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:33:58.027170  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:33:58.052767  546345 cri.go:89] found id: ""
	I1202 22:33:58.052791  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.052801  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:33:58.052808  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:33:58.052866  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:33:58.077589  546345 cri.go:89] found id: ""
	I1202 22:33:58.077616  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.077626  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:33:58.077634  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:33:58.077736  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:33:58.102352  546345 cri.go:89] found id: ""
	I1202 22:33:58.102377  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.102385  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:33:58.102394  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:33:58.102453  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:33:58.127151  546345 cri.go:89] found id: ""
	I1202 22:33:58.127174  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.127183  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:33:58.127203  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:33:58.127264  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:33:58.153068  546345 cri.go:89] found id: ""
	I1202 22:33:58.153097  546345 logs.go:282] 0 containers: []
	W1202 22:33:58.153106  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:33:58.153116  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:33:58.153128  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:33:58.207341  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:33:58.207375  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:33:58.223908  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:33:58.223993  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:33:58.303303  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:33:58.282435   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.282890   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.284669   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.285085   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.286613   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:33:58.282435   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.282890   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.284669   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.285085   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:33:58.286613   12642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:33:58.303374  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:33:58.303401  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:33:58.339284  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:33:58.339358  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
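The "listing CRI containers in root /run/containerd/runc/k8s.io" lines refer to the runc state directory for containerd's k8s.io namespace. If ctr is available on the node (an assumption; the test itself only uses crictl), the same namespace can be inspected directly:

    # Containers in the k8s.io containerd namespace, the view crictl queries via CRI.
    sudo ctr --namespace k8s.io containers list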
	I1202 22:34:00.884684  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:00.894955  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:00.895043  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:00.919607  546345 cri.go:89] found id: ""
	I1202 22:34:00.919638  546345 logs.go:282] 0 containers: []
	W1202 22:34:00.919648  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:00.919655  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:00.919714  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:00.943845  546345 cri.go:89] found id: ""
	I1202 22:34:00.943869  546345 logs.go:282] 0 containers: []
	W1202 22:34:00.943877  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:00.943883  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:00.943942  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:00.969291  546345 cri.go:89] found id: ""
	I1202 22:34:00.969316  546345 logs.go:282] 0 containers: []
	W1202 22:34:00.969325  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:00.969332  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:00.969387  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:00.998170  546345 cri.go:89] found id: ""
	I1202 22:34:00.998194  546345 logs.go:282] 0 containers: []
	W1202 22:34:00.998203  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:00.998210  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:00.998267  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:01.028082  546345 cri.go:89] found id: ""
	I1202 22:34:01.028108  546345 logs.go:282] 0 containers: []
	W1202 22:34:01.028118  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:01.028125  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:01.028182  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:01.052163  546345 cri.go:89] found id: ""
	I1202 22:34:01.052190  546345 logs.go:282] 0 containers: []
	W1202 22:34:01.052198  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:01.052204  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:01.052261  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:01.079605  546345 cri.go:89] found id: ""
	I1202 22:34:01.079638  546345 logs.go:282] 0 containers: []
	W1202 22:34:01.079648  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:01.079655  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:01.079727  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:01.104672  546345 cri.go:89] found id: ""
	I1202 22:34:01.104697  546345 logs.go:282] 0 containers: []
	W1202 22:34:01.104705  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:01.104714  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:01.104727  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:01.168637  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:01.168689  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:01.186088  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:01.186120  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:01.254373  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:01.244820   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.245479   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.247310   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.247977   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.250513   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:34:01.244820   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.245479   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.247310   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.247977   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:01.250513   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:01.254405  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:01.254421  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:01.279534  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:01.279570  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:03.844056  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:03.854485  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:03.854559  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:03.883518  546345 cri.go:89] found id: ""
	I1202 22:34:03.883539  546345 logs.go:282] 0 containers: []
	W1202 22:34:03.883547  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:03.883555  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:03.883616  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:03.907609  546345 cri.go:89] found id: ""
	I1202 22:34:03.907634  546345 logs.go:282] 0 containers: []
	W1202 22:34:03.907643  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:03.907650  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:03.907708  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:03.931661  546345 cri.go:89] found id: ""
	I1202 22:34:03.931686  546345 logs.go:282] 0 containers: []
	W1202 22:34:03.931694  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:03.931701  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:03.931762  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:03.956212  546345 cri.go:89] found id: ""
	I1202 22:34:03.956236  546345 logs.go:282] 0 containers: []
	W1202 22:34:03.956245  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:03.956252  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:03.956310  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:03.982858  546345 cri.go:89] found id: ""
	I1202 22:34:03.982882  546345 logs.go:282] 0 containers: []
	W1202 22:34:03.982890  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:03.982899  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:03.982955  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:04.008609  546345 cri.go:89] found id: ""
	I1202 22:34:04.008637  546345 logs.go:282] 0 containers: []
	W1202 22:34:04.008646  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:04.008654  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:04.008718  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:04.034395  546345 cri.go:89] found id: ""
	I1202 22:34:04.034426  546345 logs.go:282] 0 containers: []
	W1202 22:34:04.034436  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:04.034443  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:04.034503  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:04.059450  546345 cri.go:89] found id: ""
	I1202 22:34:04.059474  546345 logs.go:282] 0 containers: []
	W1202 22:34:04.059482  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:04.059492  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:04.059503  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:04.116204  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:04.116237  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:04.131753  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:04.131779  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:04.195398  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:04.187783   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.188327   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.189976   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.190535   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.192070   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:34:04.187783   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.188327   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.189976   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.190535   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:04.192070   12867 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:04.195417  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:04.195431  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:04.220265  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:04.220302  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:06.748017  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:06.758416  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:06.758487  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:06.786850  546345 cri.go:89] found id: ""
	I1202 22:34:06.786877  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.786886  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:06.786893  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:06.786958  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:06.811248  546345 cri.go:89] found id: ""
	I1202 22:34:06.811274  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.811283  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:06.811290  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:06.811352  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:06.835885  546345 cri.go:89] found id: ""
	I1202 22:34:06.835911  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.835920  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:06.835927  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:06.835986  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:06.861031  546345 cri.go:89] found id: ""
	I1202 22:34:06.861057  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.861066  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:06.861076  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:06.861137  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:06.885492  546345 cri.go:89] found id: ""
	I1202 22:34:06.885518  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.885526  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:06.885533  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:06.885621  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:06.911207  546345 cri.go:89] found id: ""
	I1202 22:34:06.911233  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.911242  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:06.911249  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:06.911307  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:06.936761  546345 cri.go:89] found id: ""
	I1202 22:34:06.936786  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.936794  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:06.936801  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:06.936858  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:06.961200  546345 cri.go:89] found id: ""
	I1202 22:34:06.961225  546345 logs.go:282] 0 containers: []
	W1202 22:34:06.961233  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:06.961242  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:06.961253  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:07.017396  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:07.017432  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:07.033140  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:07.033220  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:07.098724  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:07.091082   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.091775   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.093263   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.093721   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.095156   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:34:07.091082   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.091775   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.093263   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.093721   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:07.095156   12981 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:07.098749  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:07.098764  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:07.123278  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:07.123313  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:09.654822  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:09.666550  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:09.666631  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:09.696479  546345 cri.go:89] found id: ""
	I1202 22:34:09.696501  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.696510  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:09.696516  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:09.696573  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:09.720695  546345 cri.go:89] found id: ""
	I1202 22:34:09.720717  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.720725  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:09.720732  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:09.720789  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:09.743340  546345 cri.go:89] found id: ""
	I1202 22:34:09.743366  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.743374  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:09.743381  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:09.743441  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:09.771827  546345 cri.go:89] found id: ""
	I1202 22:34:09.771851  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.771859  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:09.771866  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:09.771942  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:09.800440  546345 cri.go:89] found id: ""
	I1202 22:34:09.800511  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.800522  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:09.800529  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:09.800599  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:09.827898  546345 cri.go:89] found id: ""
	I1202 22:34:09.827933  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.827942  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:09.827949  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:09.828053  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:09.851874  546345 cri.go:89] found id: ""
	I1202 22:34:09.851909  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.851918  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:09.851925  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:09.852023  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:09.876063  546345 cri.go:89] found id: ""
	I1202 22:34:09.876098  546345 logs.go:282] 0 containers: []
	W1202 22:34:09.876106  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:09.876136  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:09.876157  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:09.931102  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:09.931140  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:09.947006  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:09.947033  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:10.016167  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:10.007437   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.008283   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.010218   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.010846   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.012661   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:34:10.007437   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.008283   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.010218   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.010846   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:10.012661   13095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:10.016189  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:10.016202  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:10.042713  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:10.042746  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:12.574841  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:12.602704  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:12.602776  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:12.630259  546345 cri.go:89] found id: ""
	I1202 22:34:12.630283  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.630291  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:12.630298  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:12.630356  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:12.653540  546345 cri.go:89] found id: ""
	I1202 22:34:12.653571  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.653580  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:12.653587  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:12.653726  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:12.678660  546345 cri.go:89] found id: ""
	I1202 22:34:12.678685  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.678694  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:12.678701  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:12.678761  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:12.702118  546345 cri.go:89] found id: ""
	I1202 22:34:12.702147  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.702155  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:12.702162  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:12.702262  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:12.729590  546345 cri.go:89] found id: ""
	I1202 22:34:12.729615  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.729624  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:12.729631  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:12.729713  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:12.755560  546345 cri.go:89] found id: ""
	I1202 22:34:12.755586  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.755594  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:12.755601  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:12.755656  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:12.788269  546345 cri.go:89] found id: ""
	I1202 22:34:12.788293  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.788302  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:12.788308  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:12.788366  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:12.812214  546345 cri.go:89] found id: ""
	I1202 22:34:12.812239  546345 logs.go:282] 0 containers: []
	W1202 22:34:12.812248  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:12.812257  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:12.812268  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:12.841941  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:12.841966  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:12.896188  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:12.896219  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:12.911694  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:12.911721  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:12.975342  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:12.967919   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.968476   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.970099   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.970747   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.972209   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:34:12.967919   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.968476   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.970099   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.970747   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:12.972209   13223 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:12.975377  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:12.975389  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:15.502887  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:15.513338  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1202 22:34:15.513418  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 22:34:15.536875  546345 cri.go:89] found id: ""
	I1202 22:34:15.536897  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.536905  546345 logs.go:284] No container was found matching "kube-apiserver"
	I1202 22:34:15.536911  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1202 22:34:15.536970  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 22:34:15.573309  546345 cri.go:89] found id: ""
	I1202 22:34:15.573335  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.573360  546345 logs.go:284] No container was found matching "etcd"
	I1202 22:34:15.573368  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1202 22:34:15.573433  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 22:34:15.623126  546345 cri.go:89] found id: ""
	I1202 22:34:15.623149  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.623157  546345 logs.go:284] No container was found matching "coredns"
	I1202 22:34:15.623164  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1202 22:34:15.623221  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 22:34:15.657458  546345 cri.go:89] found id: ""
	I1202 22:34:15.657484  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.657493  546345 logs.go:284] No container was found matching "kube-scheduler"
	I1202 22:34:15.657500  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1202 22:34:15.657568  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 22:34:15.681354  546345 cri.go:89] found id: ""
	I1202 22:34:15.681380  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.681389  546345 logs.go:284] No container was found matching "kube-proxy"
	I1202 22:34:15.681395  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 22:34:15.681456  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 22:34:15.705775  546345 cri.go:89] found id: ""
	I1202 22:34:15.705848  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.705874  546345 logs.go:284] No container was found matching "kube-controller-manager"
	I1202 22:34:15.705894  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1202 22:34:15.705971  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 22:34:15.731425  546345 cri.go:89] found id: ""
	I1202 22:34:15.731448  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.731457  546345 logs.go:284] No container was found matching "kindnet"
	I1202 22:34:15.731464  546345 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1202 22:34:15.731521  546345 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1202 22:34:15.755658  546345 cri.go:89] found id: ""
	I1202 22:34:15.755682  546345 logs.go:282] 0 containers: []
	W1202 22:34:15.755690  546345 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1202 22:34:15.755699  546345 logs.go:123] Gathering logs for kubelet ...
	I1202 22:34:15.755711  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 22:34:15.811079  546345 logs.go:123] Gathering logs for dmesg ...
	I1202 22:34:15.811113  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 22:34:15.827246  546345 logs.go:123] Gathering logs for describe nodes ...
	I1202 22:34:15.827272  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 22:34:15.889878  546345 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:15.882005   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.882392   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.884118   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.884767   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.886280   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1202 22:34:15.882005   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.882392   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.884118   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.884767   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:15.886280   13324 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 22:34:15.889899  546345 logs.go:123] Gathering logs for containerd ...
	I1202 22:34:15.889912  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1202 22:34:15.915317  546345 logs.go:123] Gathering logs for container status ...
	I1202 22:34:15.915350  546345 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 22:34:18.445059  546345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 22:34:18.458773  546345 out.go:203] 
	W1202 22:34:18.461733  546345 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1202 22:34:18.461774  546345 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1202 22:34:18.461784  546345 out.go:285] * Related issues:
	W1202 22:34:18.461797  546345 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1202 22:34:18.461818  546345 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1202 22:34:18.464650  546345 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311616018Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311627086Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311638754Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311647836Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311661645Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311673271Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311695220Z" level=info msg="runtime interface created"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311701619Z" level=info msg="created NRI interface"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311714862Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311747001Z" level=info msg="Connect containerd service"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.311985262Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.313086719Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.326105435Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.326173330Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.326202376Z" level=info msg="Start subscribing containerd event"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.326244262Z" level=info msg="Start recovering state"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.346481012Z" level=info msg="Start event monitor"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.346541925Z" level=info msg="Start cni network conf syncer for default"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.346551911Z" level=info msg="Start streaming server"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.346565096Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.346574146Z" level=info msg="runtime interface starting up..."
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.346580505Z" level=info msg="starting plugins..."
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.346733550Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 22:28:16 newest-cni-250247 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 02 22:28:16 newest-cni-250247 containerd[556]: time="2025-12-02T22:28:16.348731317Z" level=info msg="containerd successfully booted in 0.056907s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:34:31.298357   13988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:31.299089   13988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:31.300819   13988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:31.301396   13988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:34:31.303095   13988 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 22:34:31 up  4:16,  0 user,  load average: 1.73, 1.03, 1.12
	Linux newest-cni-250247 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 22:34:28 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:34:29 newest-cni-250247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
	Dec 02 22:34:29 newest-cni-250247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:29 newest-cni-250247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:29 newest-cni-250247 kubelet[13851]: E1202 22:34:29.107908   13851 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:34:29 newest-cni-250247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:34:29 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:34:29 newest-cni-250247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
	Dec 02 22:34:29 newest-cni-250247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:29 newest-cni-250247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:29 newest-cni-250247 kubelet[13887]: E1202 22:34:29.846633   13887 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:34:29 newest-cni-250247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:34:29 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:34:30 newest-cni-250247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
	Dec 02 22:34:30 newest-cni-250247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:30 newest-cni-250247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:30 newest-cni-250247 kubelet[13892]: E1202 22:34:30.619906   13892 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:34:30 newest-cni-250247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:34:30 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:34:31 newest-cni-250247 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
	Dec 02 22:34:31 newest-cni-250247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:31 newest-cni-250247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:34:31 newest-cni-250247 kubelet[13992]: E1202 22:34:31.346357   13992 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:34:31 newest-cni-250247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:34:31 newest-cni-250247 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-250247 -n newest-cni-250247
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-250247 -n newest-cni-250247: exit status 2 (388.385314ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "newest-cni-250247" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (9.50s)
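
Taken together, the dump above points at a single root cause: every crictl listing comes back empty and every kubectl call is refused because the control-plane static pods were never started, and the kubelet journal shows why, with kubelet crash-looping on "kubelet is configured to not run on a host using cgroup v1". What follows is a minimal sketch, assuming shell access to the node (for example via minikube ssh -p newest-cni-250247), of how one might confirm that diagnosis by hand; the stat check is the standard way to tell the two cgroup hierarchies apart, and the grep simply re-finds the validation error already captured in the journal above:

    # cgroup2fs means the unified cgroup v2 hierarchy; tmpfs means the
    # legacy cgroup v1 hierarchy, which this kubelet build refuses to
    # run on.
    stat -fc %T /sys/fs/cgroup/

    # Re-read the same validation failure the dump captured above.
    sudo journalctl -u kubelet -n 400 | grep -F 'cgroup v1'

On hosts that can be reconfigured, booting with the unified hierarchy (for example via the systemd.unified_cgroup_hierarchy=1 kernel parameter) is the usual route onto cgroup v2; whether that is an option for this CI worker is outside what the report shows.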

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (255.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	[previous WARNING repeated 38 more times]
E1202 22:39:13.493284  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
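(The interleaved cert_rotation errors appear to come from client-go's tls-transport-cache in the shared test process: it tries to reload client certificates for profiles, here functional-753958, whose .minikube directories were already deleted by cleanup of earlier tests, so they are concurrent-harness noise rather than part of this test's failure.)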
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	[previous WARNING repeated 30 more times]
E1202 22:39:44.122646  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	[previous WARNING repeated 70 more times]
E1202 22:40:55.827149  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/auto-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:40:55.833555  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/auto-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:40:55.845059  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/auto-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:40:55.866540  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/auto-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:40:55.907932  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/auto-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1202 22:40:55.990243  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/auto-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:40:56.151861  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/auto-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:40:56.473580  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/auto-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1202 22:40:57.115315  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/auto-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	[previous WARNING repeated 3 more times]
E1202 22:41:00.959467  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/auto-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	[previous WARNING repeated 4 more times]
E1202 22:41:06.081178  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/auto-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	[previous WARNING repeated 9 more times]
E1202 22:41:16.323343  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/auto-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1202 22:41:17.026910  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/default-k8s-diff-port-444714/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous message repeated 10 more times)
E1202 22:41:28.654537  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous message repeated 7 more times)
E1202 22:41:36.805222  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/auto-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous message repeated 40 more times)
E1202 22:42:17.767014  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/auto-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous message repeated 18 more times)
E1202 22:42:36.513394  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous message repeated 6 more times)
E1202 22:42:43.439189  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kindnet-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:42:43.445553  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kindnet-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:42:43.456919  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kindnet-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:42:43.478307  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kindnet-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:42:43.519700  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kindnet-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:42:43.601258  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kindnet-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:42:43.762561  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kindnet-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1202 22:42:44.084695  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kindnet-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:42:44.726052  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kindnet-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
(previous message repeated 1 more time)
E1202 22:42:46.008036  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kindnet-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
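The interleaved cert_rotation errors are background noise rather than part of this failure: PID 263241 is the long-lived test driver, and client-go's certificate-rotation watcher inside it keeps retrying client.crt paths for profiles that earlier tests already deleted (auto-577910, default-k8s-diff-port-444714, old-k8s-version-996157, kindnet-577910); the roughly doubling intervals in the 22:42:43-22:42:46 burst are its retry backoff. When reading a log like this, the noise can be filtered out first, for example (a sketch; test.log is a hypothetical local copy of this report):

	# drop the cert-rotation noise, keep this test's own lines
	grep -v 'cert_rotation.go' test.log | grep 'AddonExistsAfterStop'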
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-904303 -n no-preload-904303
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-904303 -n no-preload-904303: exit status 2 (363.00678ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-904303" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-904303 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-904303 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.691µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-904303 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
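The chain above has one root cause: the API server on no-preload-904303 never came back after the stop/start, so the nine-minute pod poll, the status probe, and the kubectl describe all fail in sequence. A minimal manual re-check of the same three steps, using the command forms taken from this log, might look like the following sketch (assuming the profile and kubeconfig context still exist on the runner):

	# control-plane state, as the helper queries it
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-904303 -n no-preload-904303

	# the dashboard pods the poll was waiting for
	kubectl --context no-preload-904303 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

	# the scraper deployment the image assertion reads
	kubectl --context no-preload-904303 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard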
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-904303
helpers_test.go:243: (dbg) docker inspect no-preload-904303:

-- stdout --
	[
	    {
	        "Id": "419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436",
	        "Created": "2025-12-02T22:12:48.891111789Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 539728,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-02T22:23:22.642400086Z",
	            "FinishedAt": "2025-12-02T22:23:21.316417439Z"
	        },
	        "Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
	        "ResolvConfPath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/hostname",
	        "HostsPath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/hosts",
	        "LogPath": "/var/lib/docker/containers/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436/419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436-json.log",
	        "Name": "/no-preload-904303",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-904303:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-904303",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "419e3dce7c5db06c30b3d647bf2f57e4513c46ef3fe44c7b3dc7daa8055ba436",
	                "LowerDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021/merged",
	                "UpperDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021/diff",
	                "WorkDir": "/var/lib/docker/overlay2/242e367194ea7c190613cfcedc3b43df3bfd078ac2382cd3982120f6d43a1021/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-904303",
	                "Source": "/var/lib/docker/volumes/no-preload-904303/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-904303",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-904303",
	                "name.minikube.sigs.k8s.io": "no-preload-904303",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b2c027d5072096e798c0b710c59b479b1cd1269246af142ef5e7ac6eb2231d21",
	            "SandboxKey": "/var/run/docker/netns/b2c027d50720",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-904303": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:71:1d:c1:74:1c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bd7fe0193300ea97495798d9ee6ddb57b917596827758698a61d4a79d61723bf",
	                    "EndpointID": "d640ee5b3f22cc33822a769221598d10c33902fafb82f4150c227e00cda4eee4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-904303",
	                        "419e3dce7c5d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
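The inspect output pins down where the failure lives: the container has been "running" since 2025-12-02T22:23:22Z, the profile network assigns 192.168.76.2, and 8443/tcp is published on 127.0.0.1:33421, yet every connection to 8443 is refused, so the breakage is inside the guest (the apiserver), not at the Docker layer. The relevant fields can be pulled without the full JSON via --format templates, e.g. (a sketch; the template paths follow the field structure shown above):

	docker inspect -f '{{.State.Status}} since {{.State.StartedAt}}' no-preload-904303
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-904303
	docker inspect -f '{{(index .NetworkSettings.Networks "no-preload-904303").IPAddress}}' no-preload-904303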
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-904303 -n no-preload-904303
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-904303 -n no-preload-904303: exit status 2 (383.202348ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
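Note the split readings: {{.Host}} reports Running while {{.APIServer}} reported Stopped above, i.e. the Docker container is alive but the control plane inside it is not, which matches the connection-refused polling. Both values come from the same status struct, so they could be read in one call, e.g. (a sketch, assuming minikube's --format accepts a combined Go template the way it does the single-field forms used in this log):

	out/minikube-linux-arm64 status -p no-preload-904303 --format 'host={{.Host}} apiserver={{.APIServer}} kubelet={{.Kubelet}}'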
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-904303 logs -n 25
E1202 22:42:48.576792  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kindnet-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                     │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p enable-default-cni-577910 sudo systemctl status kubelet --all --full --no-pager                                                           │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:42 UTC │
	│ ssh     │ -p enable-default-cni-577910 sudo systemctl cat kubelet --no-pager                                                                           │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:42 UTC │
	│ ssh     │ -p enable-default-cni-577910 sudo journalctl -xeu kubelet --all --full --no-pager                                                            │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:42 UTC │
	│ ssh     │ -p enable-default-cni-577910 sudo cat /etc/kubernetes/kubelet.conf                                                                           │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:42 UTC │
	│ ssh     │ -p enable-default-cni-577910 sudo cat /var/lib/kubelet/config.yaml                                                                           │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:42 UTC │
	│ ssh     │ -p enable-default-cni-577910 sudo systemctl status docker --all --full --no-pager                                                            │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │                     │
	│ ssh     │ -p enable-default-cni-577910 sudo systemctl cat docker --no-pager                                                                            │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:42 UTC │
	│ ssh     │ -p enable-default-cni-577910 sudo cat /etc/docker/daemon.json                                                                                │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │                     │
	│ ssh     │ -p enable-default-cni-577910 sudo docker system info                                                                                         │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │                     │
	│ ssh     │ -p enable-default-cni-577910 sudo systemctl status cri-docker --all --full --no-pager                                                        │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │                     │
	│ ssh     │ -p enable-default-cni-577910 sudo systemctl cat cri-docker --no-pager                                                                        │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:42 UTC │
	│ ssh     │ -p enable-default-cni-577910 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                   │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │                     │
	│ ssh     │ -p enable-default-cni-577910 sudo cat /usr/lib/systemd/system/cri-docker.service                                                             │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:42 UTC │
	│ ssh     │ -p enable-default-cni-577910 sudo cri-dockerd --version                                                                                      │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:42 UTC │
	│ ssh     │ -p enable-default-cni-577910 sudo systemctl status containerd --all --full --no-pager                                                        │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:42 UTC │
	│ ssh     │ -p enable-default-cni-577910 sudo systemctl cat containerd --no-pager                                                                        │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:42 UTC │
	│ ssh     │ -p enable-default-cni-577910 sudo cat /lib/systemd/system/containerd.service                                                                 │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:42 UTC │
	│ ssh     │ -p enable-default-cni-577910 sudo cat /etc/containerd/config.toml                                                                            │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:42 UTC │
	│ ssh     │ -p enable-default-cni-577910 sudo containerd config dump                                                                                     │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:42 UTC │
	│ ssh     │ -p enable-default-cni-577910 sudo systemctl status crio --all --full --no-pager                                                              │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │                     │
	│ ssh     │ -p enable-default-cni-577910 sudo systemctl cat crio --no-pager                                                                              │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:42 UTC │
	│ ssh     │ -p enable-default-cni-577910 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                    │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:42 UTC │
	│ ssh     │ -p enable-default-cni-577910 sudo crio config                                                                                                │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:42 UTC │
	│ delete  │ -p enable-default-cni-577910                                                                                                                 │ enable-default-cni-577910 │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │ 02 Dec 25 22:42 UTC │
	│ start   │ -p bridge-577910 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd │ bridge-577910             │ jenkins │ v1.37.0 │ 02 Dec 25 22:42 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
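	Each row above records one invocation of the minikube binary under test. As a reconstruction for reference (the binary path is taken from MINIKUBE_BIN below; the flags are verbatim from the row), the final start row corresponds to:
	
	  out/minikube-linux-arm64 start -p bridge-577910 --memory=3072 --alsologtostderr \
	    --wait=true --wait-timeout=15m --cni=bridge --driver=docker --container-runtime=containerd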
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 22:42:34
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 22:42:34.201831  599680 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:42:34.202003  599680 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:42:34.202034  599680 out.go:374] Setting ErrFile to fd 2...
	I1202 22:42:34.202055  599680 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:42:34.202312  599680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 22:42:34.202727  599680 out.go:368] Setting JSON to false
	I1202 22:42:34.203584  599680 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15893,"bootTime":1764699462,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 22:42:34.203676  599680 start.go:143] virtualization:  
	I1202 22:42:34.205628  599680 out.go:179] * [bridge-577910] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:42:34.207264  599680 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:42:34.207381  599680 notify.go:221] Checking for updates...
	I1202 22:42:34.210874  599680 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:42:34.213034  599680 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:42:34.214544  599680 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 22:42:34.215983  599680 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:42:34.217329  599680 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:42:34.219119  599680 config.go:182] Loaded profile config "no-preload-904303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:42:34.219222  599680 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:42:34.243217  599680 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:42:34.243342  599680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:42:34.322317  599680 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:42:34.312937289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:42:34.322418  599680 docker.go:319] overlay module found
	I1202 22:42:34.325502  599680 out.go:179] * Using the docker driver based on user configuration
	I1202 22:42:34.327385  599680 start.go:309] selected driver: docker
	I1202 22:42:34.327408  599680 start.go:927] validating driver "docker" against <nil>
	I1202 22:42:34.327423  599680 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:42:34.328116  599680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:42:34.402071  599680 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:42:34.392263741 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:42:34.402240  599680 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 22:42:34.402464  599680 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 22:42:34.403956  599680 out.go:179] * Using Docker driver with root privileges
	I1202 22:42:34.405209  599680 cni.go:84] Creating CNI manager for "bridge"
	I1202 22:42:34.405224  599680 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1202 22:42:34.405305  599680 start.go:353] cluster config:
	{Name:bridge-577910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-577910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:42:34.407619  599680 out.go:179] * Starting "bridge-577910" primary control-plane node in "bridge-577910" cluster
	I1202 22:42:34.409043  599680 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 22:42:34.410770  599680 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
	I1202 22:42:34.412402  599680 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1202 22:42:34.412457  599680 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1202 22:42:34.412477  599680 cache.go:65] Caching tarball of preloaded images
	I1202 22:42:34.412475  599680 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 22:42:34.412557  599680 preload.go:238] Found /home/jenkins/minikube-integration/21997-261381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1202 22:42:34.412567  599680 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on containerd
	I1202 22:42:34.412668  599680 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/config.json ...
	I1202 22:42:34.412684  599680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/config.json: {Name:mkaf16cb184f965af524f6164b383c683c225199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
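	The cluster config dumped above is persisted as JSON at the profile path shown in the "Saving config" line; a hypothetical way to inspect it after the run (not executed by the test):
	
	  cat /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/config.json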
	I1202 22:42:34.431556  599680 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
	I1202 22:42:34.431579  599680 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
	I1202 22:42:34.431600  599680 cache.go:243] Successfully downloaded all kic artifacts
	I1202 22:42:34.431632  599680 start.go:360] acquireMachinesLock for bridge-577910: {Name:mk749e738056c5ebe8e9d954d638a6dc8d2b7b58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 22:42:34.431735  599680 start.go:364] duration metric: took 82.722µs to acquireMachinesLock for "bridge-577910"
	I1202 22:42:34.431770  599680 start.go:93] Provisioning new machine with config: &{Name:bridge-577910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-577910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1202 22:42:34.431838  599680 start.go:125] createHost starting for "" (driver="docker")
	I1202 22:42:34.433442  599680 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1202 22:42:34.433679  599680 start.go:159] libmachine.API.Create for "bridge-577910" (driver="docker")
	I1202 22:42:34.433710  599680 client.go:173] LocalClient.Create starting
	I1202 22:42:34.433811  599680 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem
	I1202 22:42:34.433861  599680 main.go:143] libmachine: Decoding PEM data...
	I1202 22:42:34.433879  599680 main.go:143] libmachine: Parsing certificate...
	I1202 22:42:34.433948  599680 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem
	I1202 22:42:34.433973  599680 main.go:143] libmachine: Decoding PEM data...
	I1202 22:42:34.433985  599680 main.go:143] libmachine: Parsing certificate...
	I1202 22:42:34.434367  599680 cli_runner.go:164] Run: docker network inspect bridge-577910 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 22:42:34.450481  599680 cli_runner.go:211] docker network inspect bridge-577910 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 22:42:34.450582  599680 network_create.go:284] running [docker network inspect bridge-577910] to gather additional debugging logs...
	I1202 22:42:34.450605  599680 cli_runner.go:164] Run: docker network inspect bridge-577910
	W1202 22:42:34.465611  599680 cli_runner.go:211] docker network inspect bridge-577910 returned with exit code 1
	I1202 22:42:34.465642  599680 network_create.go:287] error running [docker network inspect bridge-577910]: docker network inspect bridge-577910: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network bridge-577910 not found
	I1202 22:42:34.465699  599680 network_create.go:289] output of [docker network inspect bridge-577910]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network bridge-577910 not found
	
	** /stderr **
	I1202 22:42:34.465791  599680 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:42:34.481979  599680 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-37045a918311 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:0e:6a:1d:5f:aa} reservation:<nil>}
	I1202 22:42:34.482338  599680 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-11c615b6a402 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:e5:fa:65:65:bf} reservation:<nil>}
	I1202 22:42:34.482697  599680 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-efeb1d3ec8c6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:0d:78:3a:6e:22} reservation:<nil>}
	I1202 22:42:34.483021  599680 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bd7fe0193300 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:96:46:f1:c8:59:e0} reservation:<nil>}
	I1202 22:42:34.483510  599680 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e5b30}
	I1202 22:42:34.483546  599680 network_create.go:124] attempt to create docker network bridge-577910 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1202 22:42:34.483624  599680 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-577910 bridge-577910
	I1202 22:42:34.534711  599680 network_create.go:108] docker network bridge-577910 192.168.85.0/24 created
	I1202 22:42:34.534740  599680 kic.go:121] calculated static IP "192.168.85.2" for the "bridge-577910" container
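	The subnet scan above picks the first free 192.168.x.0/24 block; the resulting network can be spot-checked from the host with a standard inspect (hypothetical, not part of the captured run):
	
	  docker network inspect bridge-577910 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	  # expected: 192.168.85.0/24 192.168.85.1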
	I1202 22:42:34.534811  599680 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 22:42:34.550536  599680 cli_runner.go:164] Run: docker volume create bridge-577910 --label name.minikube.sigs.k8s.io=bridge-577910 --label created_by.minikube.sigs.k8s.io=true
	I1202 22:42:34.566494  599680 oci.go:103] Successfully created a docker volume bridge-577910
	I1202 22:42:34.566580  599680 cli_runner.go:164] Run: docker run --rm --name bridge-577910-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-577910 --entrypoint /usr/bin/test -v bridge-577910:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
	I1202 22:42:34.991249  599680 oci.go:107] Successfully prepared a docker volume bridge-577910
	I1202 22:42:34.991307  599680 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1202 22:42:34.991317  599680 kic.go:194] Starting extracting preloaded images to volume ...
	I1202 22:42:34.991405  599680 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-261381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v bridge-577910:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir
	I1202 22:42:38.946456  599680 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21997-261381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v bridge-577910:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -I lz4 -xf /preloaded.tar -C /extractDir: (3.955008725s)
	I1202 22:42:38.946488  599680 kic.go:203] duration metric: took 3.955167489s to extract preloaded images to volume ...
	W1202 22:42:38.946621  599680 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1202 22:42:38.946736  599680 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 22:42:39.016675  599680 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname bridge-577910 --name bridge-577910 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-577910 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=bridge-577910 --network bridge-577910 --ip 192.168.85.2 --volume bridge-577910:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
	I1202 22:42:39.309077  599680 cli_runner.go:164] Run: docker container inspect bridge-577910 --format={{.State.Running}}
	I1202 22:42:39.336735  599680 cli_runner.go:164] Run: docker container inspect bridge-577910 --format={{.State.Status}}
	I1202 22:42:39.358129  599680 cli_runner.go:164] Run: docker exec bridge-577910 stat /var/lib/dpkg/alternatives/iptables
	I1202 22:42:39.406568  599680 oci.go:144] the created container "bridge-577910" has a running status.
	I1202 22:42:39.406599  599680 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/bridge-577910/id_rsa...
	I1202 22:42:39.978688  599680 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-261381/.minikube/machines/bridge-577910/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 22:42:40.010727  599680 cli_runner.go:164] Run: docker container inspect bridge-577910 --format={{.State.Status}}
	I1202 22:42:40.037724  599680 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 22:42:40.037747  599680 kic_runner.go:114] Args: [docker exec --privileged bridge-577910 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 22:42:40.099284  599680 cli_runner.go:164] Run: docker container inspect bridge-577910 --format={{.State.Status}}
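	With the key authorized inside the container, the node is reachable over the host-published SSH port (33453, per the SSH clients below); a hypothetical manual login using the key generated above:
	
	  # hypothetical; reuses this run's generated key and mapped port
	  ssh -i /home/jenkins/minikube-integration/21997-261381/.minikube/machines/bridge-577910/id_rsa -p 33453 docker@127.0.0.1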
	I1202 22:42:40.124164  599680 machine.go:94] provisionDockerMachine start ...
	I1202 22:42:40.124258  599680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-577910
	I1202 22:42:40.159658  599680 main.go:143] libmachine: Using SSH client type: native
	I1202 22:42:40.160015  599680 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1202 22:42:40.160030  599680 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 22:42:40.330089  599680 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-577910
	
	I1202 22:42:40.330111  599680 ubuntu.go:182] provisioning hostname "bridge-577910"
	I1202 22:42:40.330178  599680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-577910
	I1202 22:42:40.355815  599680 main.go:143] libmachine: Using SSH client type: native
	I1202 22:42:40.356135  599680 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1202 22:42:40.356156  599680 main.go:143] libmachine: About to run SSH command:
	sudo hostname bridge-577910 && echo "bridge-577910" | sudo tee /etc/hostname
	I1202 22:42:40.533529  599680 main.go:143] libmachine: SSH cmd err, output: <nil>: bridge-577910
	
	I1202 22:42:40.533616  599680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-577910
	I1202 22:42:40.552776  599680 main.go:143] libmachine: Using SSH client type: native
	I1202 22:42:40.553089  599680 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1202 22:42:40.553113  599680 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-577910' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-577910/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-577910' | sudo tee -a /etc/hosts; 
				fi
			fi
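	The heredoc above is idempotent: it first checks whether any /etc/hosts line already ends in the hostname, then either rewrites the existing 127.0.1.1 entry in place or appends one. A hypothetical check from inside the node:
	
	  grep '^127.0.1.1' /etc/hosts
	  # expected: 127.0.1.1 bridge-577910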
	I1202 22:42:40.710135  599680 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 22:42:40.710236  599680 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
	I1202 22:42:40.710273  599680 ubuntu.go:190] setting up certificates
	I1202 22:42:40.710284  599680 provision.go:84] configureAuth start
	I1202 22:42:40.710356  599680 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-577910
	I1202 22:42:40.728437  599680 provision.go:143] copyHostCerts
	I1202 22:42:40.728524  599680 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
	I1202 22:42:40.728538  599680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
	I1202 22:42:40.728627  599680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
	I1202 22:42:40.728755  599680 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
	I1202 22:42:40.728766  599680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
	I1202 22:42:40.728795  599680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
	I1202 22:42:40.728876  599680 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
	I1202 22:42:40.728888  599680 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
	I1202 22:42:40.728925  599680 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
	I1202 22:42:40.728995  599680 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.bridge-577910 san=[127.0.0.1 192.168.85.2 bridge-577910 localhost minikube]
	I1202 22:42:40.860269  599680 provision.go:177] copyRemoteCerts
	I1202 22:42:40.860341  599680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 22:42:40.860383  599680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-577910
	I1202 22:42:40.877323  599680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/bridge-577910/id_rsa Username:docker}
	I1202 22:42:40.985298  599680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 22:42:41.004994  599680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 22:42:41.022954  599680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 22:42:41.040011  599680 provision.go:87] duration metric: took 329.701166ms to configureAuth
	I1202 22:42:41.040037  599680 ubuntu.go:206] setting minikube options for container-runtime
	I1202 22:42:41.040231  599680 config.go:182] Loaded profile config "bridge-577910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 22:42:41.040240  599680 machine.go:97] duration metric: took 916.056554ms to provisionDockerMachine
	I1202 22:42:41.040248  599680 client.go:176] duration metric: took 6.606531252s to LocalClient.Create
	I1202 22:42:41.040262  599680 start.go:167] duration metric: took 6.606615327s to libmachine.API.Create "bridge-577910"
	I1202 22:42:41.040269  599680 start.go:293] postStartSetup for "bridge-577910" (driver="docker")
	I1202 22:42:41.040281  599680 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 22:42:41.040334  599680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 22:42:41.040373  599680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-577910
	I1202 22:42:41.062937  599680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/bridge-577910/id_rsa Username:docker}
	I1202 22:42:41.170918  599680 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 22:42:41.174125  599680 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 22:42:41.174154  599680 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1202 22:42:41.174166  599680 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
	I1202 22:42:41.174222  599680 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
	I1202 22:42:41.174308  599680 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
	I1202 22:42:41.174415  599680 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 22:42:41.181467  599680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:42:41.198415  599680 start.go:296] duration metric: took 158.131197ms for postStartSetup
	I1202 22:42:41.198775  599680 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-577910
	I1202 22:42:41.215500  599680 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/config.json ...
	I1202 22:42:41.215773  599680 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 22:42:41.215823  599680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-577910
	I1202 22:42:41.232638  599680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/bridge-577910/id_rsa Username:docker}
	I1202 22:42:41.334886  599680 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 22:42:41.340082  599680 start.go:128] duration metric: took 6.908229002s to createHost
	I1202 22:42:41.340111  599680 start.go:83] releasing machines lock for "bridge-577910", held for 6.908363234s
	I1202 22:42:41.340203  599680 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" bridge-577910
	I1202 22:42:41.360254  599680 ssh_runner.go:195] Run: cat /version.json
	I1202 22:42:41.360332  599680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-577910
	I1202 22:42:41.360577  599680 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 22:42:41.360643  599680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-577910
	I1202 22:42:41.382612  599680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/bridge-577910/id_rsa Username:docker}
	I1202 22:42:41.386059  599680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/bridge-577910/id_rsa Username:docker}
	I1202 22:42:41.570636  599680 ssh_runner.go:195] Run: systemctl --version
	I1202 22:42:41.577129  599680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 22:42:41.581639  599680 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 22:42:41.581785  599680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 22:42:41.612522  599680 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1202 22:42:41.612592  599680 start.go:496] detecting cgroup driver to use...
	I1202 22:42:41.612640  599680 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 22:42:41.612721  599680 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1202 22:42:41.627716  599680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1202 22:42:41.641314  599680 docker.go:218] disabling cri-docker service (if available) ...
	I1202 22:42:41.641391  599680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 22:42:41.659126  599680 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 22:42:41.678558  599680 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 22:42:41.796793  599680 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 22:42:41.932724  599680 docker.go:234] disabling docker service ...
	I1202 22:42:41.932848  599680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 22:42:41.954812  599680 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 22:42:41.967739  599680 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 22:42:42.099401  599680 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 22:42:42.259004  599680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 22:42:42.275446  599680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 22:42:42.294354  599680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1202 22:42:42.305946  599680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1202 22:42:42.316868  599680 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1202 22:42:42.317020  599680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1202 22:42:42.327806  599680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:42:42.338913  599680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1202 22:42:42.350510  599680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1202 22:42:42.361363  599680 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 22:42:42.370654  599680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1202 22:42:42.381010  599680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1202 22:42:42.390816  599680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
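	Taken together, the sed edits above pin the sandbox image, select the cgroupfs driver, and re-enable unprivileged ports; a sketch of the resulting /etc/containerd/config.toml fragment (not captured from the run):
	
	  [plugins."io.containerd.grpc.v1.cri"]
	    enable_unprivileged_ports = true
	    sandbox_image = "registry.k8s.io/pause:3.10.1"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = false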
	I1202 22:42:42.400746  599680 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 22:42:42.409066  599680 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 22:42:42.417160  599680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:42:42.544601  599680 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1202 22:42:42.677518  599680 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1202 22:42:42.677634  599680 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1202 22:42:42.681447  599680 start.go:564] Will wait 60s for crictl version
	I1202 22:42:42.681553  599680 ssh_runner.go:195] Run: which crictl
	I1202 22:42:42.684985  599680 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1202 22:42:42.708630  599680 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1202 22:42:42.708748  599680 ssh_runner.go:195] Run: containerd --version
	I1202 22:42:42.732550  599680 ssh_runner.go:195] Run: containerd --version
	I1202 22:42:42.757193  599680 out.go:179] * Preparing Kubernetes v1.34.2 on containerd 2.1.5 ...
	I1202 22:42:42.760230  599680 cli_runner.go:164] Run: docker network inspect bridge-577910 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 22:42:42.778788  599680 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1202 22:42:42.782597  599680 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
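	The rewrite above injects the gateway alias into the node's /etc/hosts; a hypothetical spot-check from the host (not executed by the test):
	
	  out/minikube-linux-arm64 -p bridge-577910 ssh -- grep host.minikube.internal /etc/hosts
	  # expected: 192.168.85.1	host.minikube.internal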
	I1202 22:42:42.794077  599680 kubeadm.go:884] updating cluster {Name:bridge-577910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-577910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 22:42:42.794202  599680 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1202 22:42:42.794270  599680 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:42:42.818194  599680 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 22:42:42.818218  599680 containerd.go:534] Images already preloaded, skipping extraction
	I1202 22:42:42.818277  599680 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 22:42:42.842030  599680 containerd.go:627] all images are preloaded for containerd runtime.
	I1202 22:42:42.842054  599680 cache_images.go:86] Images are preloaded, skipping loading
	I1202 22:42:42.842062  599680 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 containerd true true} ...
	I1202 22:42:42.842150  599680 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-577910 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:bridge-577910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1202 22:42:42.842217  599680 ssh_runner.go:195] Run: sudo crictl info
	I1202 22:42:42.866417  599680 cni.go:84] Creating CNI manager for "bridge"
	I1202 22:42:42.866458  599680 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 22:42:42.866487  599680 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-577910 NodeName:bridge-577910 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 22:42:42.866609  599680 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "bridge-577910"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
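	A config like the one generated above can be validated without touching the node, e.g. with kubeadm's dry-run mode against the path it is written to below (hypothetical, not part of the test run):
	
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run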
	
	I1202 22:42:42.866680  599680 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 22:42:42.874130  599680 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 22:42:42.874200  599680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 22:42:42.881369  599680 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1202 22:42:42.893896  599680 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 22:42:42.906528  599680 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1202 22:42:42.919311  599680 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1202 22:42:42.923498  599680 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 22:42:42.933086  599680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 22:42:43.037205  599680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 22:42:43.058080  599680 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910 for IP: 192.168.85.2
	I1202 22:42:43.058103  599680 certs.go:195] generating shared ca certs ...
	I1202 22:42:43.058123  599680 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:42:43.058270  599680 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
	I1202 22:42:43.058317  599680 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
	I1202 22:42:43.058327  599680 certs.go:257] generating profile certs ...
	I1202 22:42:43.058380  599680 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/client.key
	I1202 22:42:43.058397  599680 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/client.crt with IP's: []
	I1202 22:42:43.418477  599680 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/client.crt ...
	I1202 22:42:43.418512  599680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/client.crt: {Name:mk875275666ca477eba2dce7206fdf50d61a1532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:42:43.418749  599680 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/client.key ...
	I1202 22:42:43.418765  599680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/client.key: {Name:mk04a28d72e221457d863ee10c20a23b52f9bf93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:42:43.418874  599680 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/apiserver.key.57e0ec37
	I1202 22:42:43.418893  599680 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/apiserver.crt.57e0ec37 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1202 22:42:43.634854  599680 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/apiserver.crt.57e0ec37 ...
	I1202 22:42:43.634885  599680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/apiserver.crt.57e0ec37: {Name:mk184aafcb6c4d85d97c99311fa7943cdbd3e262 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:42:43.635082  599680 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/apiserver.key.57e0ec37 ...
	I1202 22:42:43.635098  599680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/apiserver.key.57e0ec37: {Name:mk1b0cb992e72b6f017d38298d366981f1901748 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:42:43.635179  599680 certs.go:382] copying /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/apiserver.crt.57e0ec37 -> /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/apiserver.crt
	I1202 22:42:43.635306  599680 certs.go:386] copying /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/apiserver.key.57e0ec37 -> /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/apiserver.key
	I1202 22:42:43.635365  599680 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/proxy-client.key
	I1202 22:42:43.635383  599680 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/proxy-client.crt with IP's: []
	I1202 22:42:44.144970  599680 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/proxy-client.crt ...
	I1202 22:42:44.145000  599680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/proxy-client.crt: {Name:mkbe31f9eec897487c961d44c463340e718e2b05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 22:42:44.145182  599680 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/proxy-client.key ...
	I1202 22:42:44.145197  599680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/proxy-client.key: {Name:mk357765a8b81426165f01820a9bafabddface03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
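(The crypto.go steps above boil down to issuing X.509 certificates whose subject alternative names are IP addresses: the cluster service VIP 10.96.0.1, loopback, and the node IP 192.168.85.2. Below is a minimal stdlib-only Go sketch of that step, for orientation only — it is self-signed for brevity, whereas minikube signs the apiserver cert with its minikubeCA; the IPs and the 3-year lifetime mirror the CertExpiration:26280h0m0s in the cluster config dumped further down.)

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Fresh RSA key, analogous to the .key files written above.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // 3 years, as in the config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The "with IP's: [...]" list from the log becomes IP SANs.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
            },
        }
        // Template == parent makes this self-signed; minikube passes its CA here.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }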
	I1202 22:42:44.145380  599680 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
	W1202 22:42:44.145426  599680 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
	I1202 22:42:44.145440  599680 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 22:42:44.145466  599680 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
	I1202 22:42:44.145495  599680 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
	I1202 22:42:44.145523  599680 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
	I1202 22:42:44.145582  599680 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
	I1202 22:42:44.146268  599680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 22:42:44.164933  599680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 22:42:44.183912  599680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 22:42:44.202041  599680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 22:42:44.219228  599680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 22:42:44.235763  599680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 22:42:44.252505  599680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 22:42:44.269710  599680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/bridge-577910/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 22:42:44.287276  599680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
	I1202 22:42:44.304286  599680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 22:42:44.321058  599680 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
	I1202 22:42:44.337622  599680 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 22:42:44.350027  599680 ssh_runner.go:195] Run: openssl version
	I1202 22:42:44.356178  599680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
	I1202 22:42:44.364444  599680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
	I1202 22:42:44.368002  599680 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 21:00 /usr/share/ca-certificates/2632412.pem
	I1202 22:42:44.368065  599680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
	I1202 22:42:44.408942  599680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 22:42:44.417198  599680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 22:42:44.425354  599680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:42:44.429068  599680 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 20:50 /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:42:44.429138  599680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 22:42:44.472304  599680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 22:42:44.480463  599680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
	I1202 22:42:44.488350  599680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
	I1202 22:42:44.492016  599680 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 21:00 /usr/share/ca-certificates/263241.pem
	I1202 22:42:44.492122  599680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
	I1202 22:42:44.532789  599680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
	I1202 22:42:44.542597  599680 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 22:42:44.547474  599680 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 22:42:44.547542  599680 kubeadm.go:401] StartCluster: {Name:bridge-577910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:bridge-577910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 22:42:44.547627  599680 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1202 22:42:44.547698  599680 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 22:42:44.574797  599680 cri.go:89] found id: ""
	I1202 22:42:44.574903  599680 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 22:42:44.585579  599680 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 22:42:44.594357  599680 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1202 22:42:44.594451  599680 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 22:42:44.606037  599680 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 22:42:44.606067  599680 kubeadm.go:158] found existing configuration files:
	
	I1202 22:42:44.606135  599680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 22:42:44.613954  599680 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 22:42:44.614039  599680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 22:42:44.621171  599680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 22:42:44.628800  599680 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 22:42:44.628864  599680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 22:42:44.636186  599680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 22:42:44.643682  599680 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 22:42:44.643773  599680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 22:42:44.650816  599680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 22:42:44.658040  599680 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 22:42:44.658110  599680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 22:42:44.665130  599680 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 22:42:44.708700  599680 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1202 22:42:44.708839  599680 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 22:42:44.729909  599680 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1202 22:42:44.730052  599680 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1202 22:42:44.730116  599680 kubeadm.go:319] OS: Linux
	I1202 22:42:44.730202  599680 kubeadm.go:319] CGROUPS_CPU: enabled
	I1202 22:42:44.730270  599680 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1202 22:42:44.730344  599680 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1202 22:42:44.730417  599680 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1202 22:42:44.730513  599680 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1202 22:42:44.730586  599680 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1202 22:42:44.730661  599680 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1202 22:42:44.730740  599680 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1202 22:42:44.730817  599680 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1202 22:42:44.813198  599680 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 22:42:44.813369  599680 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 22:42:44.813499  599680 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 22:42:44.822039  599680 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> containerd <==
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569235694Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569248986Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569265026Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569277383Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569295622Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569310916Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569325226Z" level=info msg="runtime interface created"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569332503Z" level=info msg="created NRI interface"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569349447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569391611Z" level=info msg="Connect containerd service"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.569647005Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.570228722Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.580119279Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.580197307Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.580232687Z" level=info msg="Start subscribing containerd event"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.580281530Z" level=info msg="Start recovering state"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.600449758Z" level=info msg="Start event monitor"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.600517374Z" level=info msg="Start cni network conf syncer for default"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.600528393Z" level=info msg="Start streaming server"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.600540085Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.600549102Z" level=info msg="runtime interface starting up..."
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.600555978Z" level=info msg="starting plugins..."
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.600709368Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 02 22:23:28 no-preload-904303 systemd[1]: Started containerd.service - containerd container runtime.
	Dec 02 22:23:28 no-preload-904303 containerd[555]: time="2025-12-02T22:23:28.601830385Z" level=info msg="containerd successfully booted in 0.051957s"
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1202 22:42:49.349247   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:42:49.350213   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:42:49.351863   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:42:49.352163   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1202 22:42:49.353553   10160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
	[  +4.361228] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
	[ +18.795347] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
	[ +25.695902] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
	[ +46.801539] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
	[  +9.909087] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
	[ +11.331274] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
	[ +30.586994] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
	[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 22:42:49 up  4:25,  0 user,  load average: 2.87, 1.88, 1.44
	Linux no-preload-904303 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 02 22:42:45 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:42:46 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1541.
	Dec 02 22:42:46 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:42:46 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:42:46 no-preload-904303 kubelet[10023]: E1202 22:42:46.342880   10023 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:42:46 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:42:46 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:42:47 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1542.
	Dec 02 22:42:47 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:42:47 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:42:47 no-preload-904303 kubelet[10029]: E1202 22:42:47.098924   10029 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:42:47 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:42:47 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:42:47 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1543.
	Dec 02 22:42:47 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:42:47 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:42:47 no-preload-904303 kubelet[10035]: E1202 22:42:47.953414   10035 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:42:47 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:42:47 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 02 22:42:48 no-preload-904303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1544.
	Dec 02 22:42:48 no-preload-904303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:42:48 no-preload-904303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 02 22:42:48 no-preload-904303 kubelet[10071]: E1202 22:42:48.913854   10071 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 02 22:42:48 no-preload-904303 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 02 22:42:48 no-preload-904303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-904303 -n no-preload-904303
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-904303 -n no-preload-904303: exit status 2 (485.309713ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "no-preload-904303" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (255.07s)
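The kubelet section above is the root cause of this failure group: as configured for these v1.35.0-beta.0 runs, the kubelet validates the host cgroup hierarchy and exits when it finds cgroup v1, so systemd restarts it in a loop (restart counter 1544 by 22:42:48) and the control plane never comes up on this Ubuntu 20.04 / 5.15 runner. A host can be checked for this condition by statfs-ing /sys/fs/cgroup — a minimal, Linux-only Go sketch, not part of the test suite:

    package main

    import (
        "fmt"
        "syscall"
    )

    // CGROUP2_SUPER_MAGIC from <linux/magic.h>: /sys/fs/cgroup reports this
    // filesystem type only when the host runs the unified (v2) hierarchy.
    const cgroup2SuperMagic = 0x63677270

    func main() {
        var st syscall.Statfs_t
        if err := syscall.Statfs("/sys/fs/cgroup", &st); err != nil {
            panic(err)
        }
        if uint64(st.Type) == cgroup2SuperMagic {
            fmt.Println("cgroup v2 host")
        } else {
            // tmpfs here means the legacy v1 hybrid layout, as on this runner;
            // the v1.35.0-beta.0 kubelet above refuses to start in that case.
            fmt.Println("cgroup v1 host")
        }
    }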
E1202 22:44:24.859880  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/flannel-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:44:27.205556  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
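The describe-nodes and status output above is only the downstream symptom: with the kubelet crash-looping, the static-pod apiserver never starts, nothing listens on 8443, and every client call fails at the TCP level with "connection refused". A minimal probe that separates "nothing listening" from "listening but unhealthy" — a sketch only; the URL mirrors the log, and TLS verification is skipped because this is a local health check, not a trusted client:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://localhost:8443/readyz")
        if err != nil {
            // e.g. "connect: connection refused", as in the kubectl errors above
            fmt.Println("apiserver unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(body))
    }

An anonymous request may be answered with 401/403 on a locked-down apiserver; the diagnostic point is that any HTTP answer at all rules out the connection-refused state logged here.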

                                                
                                    

Test pass (346/417)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.63
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.2/json-events 8.14
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.09
18 TestDownloadOnly/v1.34.2/DeleteAll 0.22
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 2.34
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.09
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.21
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.68
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 159.14
38 TestAddons/serial/Volcano 40.66
40 TestAddons/serial/GCPAuth/Namespaces 0.18
41 TestAddons/serial/GCPAuth/FakeCredentials 10.89
44 TestAddons/parallel/Registry 15.77
45 TestAddons/parallel/RegistryCreds 0.95
46 TestAddons/parallel/Ingress 19.88
47 TestAddons/parallel/InspektorGadget 11.94
48 TestAddons/parallel/MetricsServer 5.92
50 TestAddons/parallel/CSI 43.91
51 TestAddons/parallel/Headlamp 18.16
52 TestAddons/parallel/CloudSpanner 6.93
53 TestAddons/parallel/LocalPath 51.67
54 TestAddons/parallel/NvidiaDevicePlugin 5.6
55 TestAddons/parallel/Yakd 11.84
57 TestAddons/StoppedEnableDisable 12.31
58 TestCertOptions 36.91
59 TestCertExpiration 219.99
61 TestForceSystemdFlag 36.78
62 TestForceSystemdEnv 39.39
63 TestDockerEnvContainerd 45.26
67 TestErrorSpam/setup 32.06
68 TestErrorSpam/start 0.79
69 TestErrorSpam/status 1.49
70 TestErrorSpam/pause 1.94
71 TestErrorSpam/unpause 1.81
72 TestErrorSpam/stop 1.68
75 TestFunctional/serial/CopySyncFile 0.01
76 TestFunctional/serial/StartWithProxy 78.63
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 6.87
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.09
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.53
84 TestFunctional/serial/CacheCmd/cache/add_local 1.34
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
88 TestFunctional/serial/CacheCmd/cache/cache_reload 2
89 TestFunctional/serial/CacheCmd/cache/delete 0.11
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
92 TestFunctional/serial/ExtraConfig 50.42
93 TestFunctional/serial/ComponentHealth 0.1
94 TestFunctional/serial/LogsCmd 1.47
95 TestFunctional/serial/LogsFileCmd 1.51
96 TestFunctional/serial/InvalidService 4.48
98 TestFunctional/parallel/ConfigCmd 0.57
99 TestFunctional/parallel/DashboardCmd 6.7
100 TestFunctional/parallel/DryRun 0.45
101 TestFunctional/parallel/InternationalLanguage 0.24
102 TestFunctional/parallel/StatusCmd 1.23
106 TestFunctional/parallel/ServiceCmdConnect 8.53
107 TestFunctional/parallel/AddonsCmd 0.17
108 TestFunctional/parallel/PersistentVolumeClaim 25.43
110 TestFunctional/parallel/SSHCmd 0.72
111 TestFunctional/parallel/CpCmd 2.6
113 TestFunctional/parallel/FileSync 0.36
114 TestFunctional/parallel/CertSync 2.24
118 TestFunctional/parallel/NodeLabels 0.12
120 TestFunctional/parallel/NonActiveRuntimeDisabled 1.07
122 TestFunctional/parallel/License 0.31
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.71
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.46
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 9.65
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
136 TestFunctional/parallel/ProfileCmd/profile_list 0.44
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
138 TestFunctional/parallel/ServiceCmd/List 0.65
139 TestFunctional/parallel/MountCmd/any-port 8.85
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.62
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
142 TestFunctional/parallel/ServiceCmd/Format 0.46
143 TestFunctional/parallel/ServiceCmd/URL 0.46
144 TestFunctional/parallel/MountCmd/specific-port 2.44
145 TestFunctional/parallel/Version/short 0.08
146 TestFunctional/parallel/Version/components 1.37
147 TestFunctional/parallel/MountCmd/VerifyCleanup 2.39
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
152 TestFunctional/parallel/ImageCommands/ImageBuild 4.17
153 TestFunctional/parallel/ImageCommands/Setup 0.63
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.4
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.27
156 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.51
157 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
158 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
159 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
160 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
161 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
162 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.8
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
164 TestFunctional/delete_echo-server_images 0.05
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.38
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.02
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.07
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.32
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.81
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.14
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.95
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 0.99
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.44
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.44
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.18
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.14
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.71
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.15
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.28
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.76
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.57
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.51
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.1
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.41
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.37
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.41
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.7
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.34
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.07
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.51
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.24
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.23
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.23
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.23
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.66
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.26
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.14
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.09
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.35
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.34
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.46
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.66
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.39
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.18
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.14
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.16
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 169
265 TestMultiControlPlane/serial/DeployApp 39.1
266 TestMultiControlPlane/serial/PingHostFromPods 1.69
267 TestMultiControlPlane/serial/AddWorkerNode 29.96
268 TestMultiControlPlane/serial/NodeLabels 0.11
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.04
270 TestMultiControlPlane/serial/CopyFile 20.64
271 TestMultiControlPlane/serial/StopSecondaryNode 12.94
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
273 TestMultiControlPlane/serial/RestartSecondaryNode 12.9
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.18
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 99.82
276 TestMultiControlPlane/serial/DeleteSecondaryNode 10.97
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
278 TestMultiControlPlane/serial/StopCluster 36.36
279 TestMultiControlPlane/serial/RestartCluster 59.27
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
281 TestMultiControlPlane/serial/AddSecondaryNode 60.97
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.09
287 TestJSONOutput/start/Command 84.95
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
293 TestJSONOutput/pause/Command 0.73
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
299 TestJSONOutput/unpause/Command 0.62
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.96
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.23
312 TestKicCustomNetwork/create_custom_network 40.81
313 TestKicCustomNetwork/use_default_bridge_network 35.23
314 TestKicExistingNetwork 38.04
315 TestKicCustomSubnet 34.59
316 TestKicStaticIP 36.97
317 TestMainNoArgs 0.05
318 TestMinikubeProfile 70.99
321 TestMountStart/serial/StartWithMountFirst 8.56
322 TestMountStart/serial/VerifyMountFirst 0.28
323 TestMountStart/serial/StartWithMountSecond 8.19
324 TestMountStart/serial/VerifyMountSecond 0.27
325 TestMountStart/serial/DeleteFirst 1.71
326 TestMountStart/serial/VerifyMountPostDelete 0.28
327 TestMountStart/serial/Stop 1.28
328 TestMountStart/serial/RestartStopped 7.79
329 TestMountStart/serial/VerifyMountPostStop 0.27
332 TestMultiNode/serial/FreshStart2Nodes 104.84
333 TestMultiNode/serial/DeployApp2Nodes 5.82
334 TestMultiNode/serial/PingHostFrom2Pods 0.98
335 TestMultiNode/serial/AddNode 58.07
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.71
338 TestMultiNode/serial/CopyFile 10.8
339 TestMultiNode/serial/StopNode 2.41
340 TestMultiNode/serial/StartAfterStop 7.88
341 TestMultiNode/serial/RestartKeepsNodes 79.16
342 TestMultiNode/serial/DeleteNode 5.55
343 TestMultiNode/serial/StopMultiNode 24.08
344 TestMultiNode/serial/RestartMultiNode 51.82
345 TestMultiNode/serial/ValidateNameConflict 35.69
350 TestPreload 119.49
352 TestScheduledStopUnix 105.77
355 TestInsufficientStorage 12.15
356 TestRunningBinaryUpgrade 76.21
359 TestMissingContainerUpgrade 175.42
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
362 TestNoKubernetes/serial/StartWithK8s 42.49
363 TestNoKubernetes/serial/StartWithStopK8s 19.22
364 TestNoKubernetes/serial/Start 8.34
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
367 TestNoKubernetes/serial/ProfileList 1.25
368 TestNoKubernetes/serial/Stop 1.38
369 TestNoKubernetes/serial/StartNoArgs 7.45
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
371 TestStoppedBinaryUpgrade/Setup 11.16
372 TestStoppedBinaryUpgrade/Upgrade 52.61
373 TestStoppedBinaryUpgrade/MinikubeLogs 2.51
382 TestPause/serial/Start 52.52
383 TestPause/serial/SecondStartNoReconfiguration 6.52
384 TestPause/serial/Pause 0.74
385 TestPause/serial/VerifyStatus 0.34
386 TestPause/serial/Unpause 0.65
387 TestPause/serial/PauseAgain 0.81
388 TestPause/serial/DeletePaused 3.14
389 TestPause/serial/VerifyDeletedResources 0.4
397 TestNetworkPlugins/group/false 3.63
402 TestStartStop/group/old-k8s-version/serial/FirstStart 63.42
403 TestStartStop/group/old-k8s-version/serial/DeployApp 10.47
404 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.22
405 TestStartStop/group/old-k8s-version/serial/Stop 12.19
406 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
407 TestStartStop/group/old-k8s-version/serial/SecondStart 48.34
408 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
409 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
412 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
413 TestStartStop/group/old-k8s-version/serial/Pause 3.74
415 TestStartStop/group/embed-certs/serial/FirstStart 54.59
416 TestStartStop/group/embed-certs/serial/DeployApp 9.33
417 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.08
418 TestStartStop/group/embed-certs/serial/Stop 12.05
419 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
420 TestStartStop/group/embed-certs/serial/SecondStart 58.44
421 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
422 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
423 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
424 TestStartStop/group/embed-certs/serial/Pause 2.99
426 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 44.19
427 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.34
428 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.07
429 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.08
430 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
431 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.33
432 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
433 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
434 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
435 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.19
440 TestStartStop/group/no-preload/serial/Stop 1.29
441 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
443 TestStartStop/group/newest-cni/serial/DeployApp 0
445 TestStartStop/group/newest-cni/serial/Stop 1.31
446 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
449 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
450 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
451 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
453 TestNetworkPlugins/group/auto/Start 79.82
454 TestNetworkPlugins/group/auto/KubeletFlags 0.31
455 TestNetworkPlugins/group/auto/NetCatPod 9.29
456 TestNetworkPlugins/group/auto/DNS 0.18
457 TestNetworkPlugins/group/auto/Localhost 0.17
458 TestNetworkPlugins/group/auto/HairPin 0.15
459 TestNetworkPlugins/group/kindnet/Start 77.82
460 TestNetworkPlugins/group/kindnet/ControllerPod 6
461 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
462 TestNetworkPlugins/group/kindnet/NetCatPod 8.28
463 TestNetworkPlugins/group/kindnet/DNS 0.2
464 TestNetworkPlugins/group/kindnet/Localhost 0.14
465 TestNetworkPlugins/group/kindnet/HairPin 0.15
466 TestNetworkPlugins/group/flannel/Start 55.7
468 TestNetworkPlugins/group/flannel/ControllerPod 6.01
469 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
470 TestNetworkPlugins/group/flannel/NetCatPod 8.27
471 TestNetworkPlugins/group/flannel/DNS 0.16
472 TestNetworkPlugins/group/flannel/Localhost 0.14
473 TestNetworkPlugins/group/flannel/HairPin 0.15
474 TestNetworkPlugins/group/custom-flannel/Start 56.86
475 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
476 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.28
477 TestNetworkPlugins/group/custom-flannel/DNS 0.17
478 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
479 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
480 TestNetworkPlugins/group/enable-default-cni/Start 44.75
481 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
482 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.28
483 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
484 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
485 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
486 TestNetworkPlugins/group/bridge/Start 73.95
487 TestNetworkPlugins/group/calico/Start 60.7
488 TestNetworkPlugins/group/bridge/KubeletFlags 0.46
489 TestNetworkPlugins/group/bridge/NetCatPod 9.37
490 TestNetworkPlugins/group/calico/ControllerPod 6
491 TestNetworkPlugins/group/bridge/DNS 0.24
492 TestNetworkPlugins/group/bridge/Localhost 0.15
493 TestNetworkPlugins/group/bridge/HairPin 0.18
494 TestNetworkPlugins/group/calico/KubeletFlags 0.32
495 TestNetworkPlugins/group/calico/NetCatPod 10.29
496 TestNetworkPlugins/group/calico/DNS 0.25
497 TestNetworkPlugins/group/calico/Localhost 0.2
498 TestNetworkPlugins/group/calico/HairPin 0.29
TestDownloadOnly/v1.28.0/json-events (7.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-424785 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-424785 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.633495043s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.63s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1202 20:49:43.741508  263241 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1202 20:49:43.741597  263241 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-424785
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-424785: exit status 85 (93.352479ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-424785 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-424785 │ jenkins │ v1.37.0 │ 02 Dec 25 20:49 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:49:36
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:49:36.150872  263247 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:49:36.151020  263247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:49:36.151036  263247 out.go:374] Setting ErrFile to fd 2...
	I1202 20:49:36.151042  263247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:49:36.151610  263247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	W1202 20:49:36.151769  263247 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21997-261381/.minikube/config/config.json: open /home/jenkins/minikube-integration/21997-261381/.minikube/config/config.json: no such file or directory
	I1202 20:49:36.152205  263247 out.go:368] Setting JSON to true
	I1202 20:49:36.153008  263247 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9115,"bootTime":1764699462,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 20:49:36.153085  263247 start.go:143] virtualization:  
	I1202 20:49:36.158780  263247 out.go:99] [download-only-424785] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1202 20:49:36.158964  263247 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21997-261381/.minikube/cache/preloaded-tarball: no such file or directory
	I1202 20:49:36.159014  263247 notify.go:221] Checking for updates...
	I1202 20:49:36.162312  263247 out.go:171] MINIKUBE_LOCATION=21997
	I1202 20:49:36.165217  263247 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:49:36.168257  263247 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 20:49:36.171067  263247 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 20:49:36.173906  263247 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1202 20:49:36.179397  263247 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 20:49:36.179669  263247 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:49:36.201636  263247 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 20:49:36.201789  263247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:49:36.258212  263247 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-02 20:49:36.249557658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 20:49:36.258316  263247 docker.go:319] overlay module found
	I1202 20:49:36.261187  263247 out.go:99] Using the docker driver based on user configuration
	I1202 20:49:36.261221  263247 start.go:309] selected driver: docker
	I1202 20:49:36.261234  263247 start.go:927] validating driver "docker" against <nil>
	I1202 20:49:36.261329  263247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:49:36.319421  263247 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-02 20:49:36.310658839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 20:49:36.319573  263247 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 20:49:36.319882  263247 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1202 20:49:36.320037  263247 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 20:49:36.323128  263247 out.go:171] Using Docker driver with root privileges
	I1202 20:49:36.325959  263247 cni.go:84] Creating CNI manager for ""
	I1202 20:49:36.326030  263247 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 20:49:36.326045  263247 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 20:49:36.326122  263247 start.go:353] cluster config:
	{Name:download-only-424785 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-424785 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:49:36.329117  263247 out.go:99] Starting "download-only-424785" primary control-plane node in "download-only-424785" cluster
	I1202 20:49:36.329147  263247 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 20:49:36.332045  263247 out.go:99] Pulling base image v0.0.48-1764169655-21974 ...
	I1202 20:49:36.332093  263247 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1202 20:49:36.332234  263247 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 20:49:36.348510  263247 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 20:49:36.348694  263247 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1202 20:49:36.348802  263247 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 20:49:36.388469  263247 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1202 20:49:36.388497  263247 cache.go:65] Caching tarball of preloaded images
	I1202 20:49:36.388671  263247 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1202 20:49:36.392013  263247 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1202 20:49:36.392043  263247 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1202 20:49:36.478248  263247 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1202 20:49:36.478382  263247 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-424785 host does not exist
	  To start a cluster, run: "minikube start -p download-only-424785"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
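
A note on reading the dumps above: every "Last Start" section uses the klog header layout spelled out at its top, [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg. A minimal Go sketch of splitting one such line into fields (the regexp and the field labels are illustrative, not minikube's own parser):

  package main

  import (
      "fmt"
      "regexp"
  )

  // Matches headers like "I1202 20:49:36.150872  263247 out.go:360] msg".
  var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

  func main() {
      line := "I1202 20:49:36.150872  263247 out.go:360] Setting OutFile to fd 1 ..."
      if m := klogLine.FindStringSubmatch(line); m != nil {
          fmt.Printf("severity=%s date=%s time=%s pid=%s at=%s:%s msg=%q\n",
              m[1], m[2], m[3], m[4], m[5], m[6], m[7])
      }
  }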

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-424785
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.2/json-events (8.14s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-957967 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-957967 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.137036362s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (8.14s)
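
With -o=json, as in the start invocation above, minikube writes its progress as line-delimited JSON events on stdout. A hedged sketch that consumes such a stream without assuming any particular event schema (field names vary by event type):

  package main

  import (
      "bufio"
      "encoding/json"
      "fmt"
      "os"
  )

  func main() {
      // e.g.  minikube start -o=json --download-only ... | thisprogram
      sc := bufio.NewScanner(os.Stdin)
      sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // events can be long lines
      for sc.Scan() {
          var ev map[string]any
          if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
              continue // ignore any non-JSON noise
          }
          fmt.Println("event:", ev)
      }
  }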

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1202 20:49:52.339326  263241 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
I1202 20:49:52.339359  263241 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-957967
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-957967: exit status 85 (87.069799ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-424785 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-424785 │ jenkins │ v1.37.0 │ 02 Dec 25 20:49 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 20:49 UTC │ 02 Dec 25 20:49 UTC │
	│ delete  │ -p download-only-424785                                                                                                                                                               │ download-only-424785 │ jenkins │ v1.37.0 │ 02 Dec 25 20:49 UTC │ 02 Dec 25 20:49 UTC │
	│ start   │ -o=json --download-only -p download-only-957967 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-957967 │ jenkins │ v1.37.0 │ 02 Dec 25 20:49 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:49:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:49:44.251582  263448 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:49:44.251768  263448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:49:44.251780  263448 out.go:374] Setting ErrFile to fd 2...
	I1202 20:49:44.251785  263448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:49:44.252130  263448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 20:49:44.252593  263448 out.go:368] Setting JSON to true
	I1202 20:49:44.253378  263448 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9123,"bootTime":1764699462,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 20:49:44.253466  263448 start.go:143] virtualization:  
	I1202 20:49:44.256729  263448 out.go:99] [download-only-957967] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 20:49:44.257006  263448 notify.go:221] Checking for updates...
	I1202 20:49:44.260528  263448 out.go:171] MINIKUBE_LOCATION=21997
	I1202 20:49:44.263451  263448 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:49:44.266403  263448 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 20:49:44.269233  263448 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 20:49:44.272159  263448 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1202 20:49:44.277720  263448 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 20:49:44.278013  263448 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:49:44.307222  263448 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 20:49:44.307325  263448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:49:44.366710  263448 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-02 20:49:44.356951756 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 20:49:44.366809  263448 docker.go:319] overlay module found
	I1202 20:49:44.369768  263448 out.go:99] Using the docker driver based on user configuration
	I1202 20:49:44.369808  263448 start.go:309] selected driver: docker
	I1202 20:49:44.369815  263448 start.go:927] validating driver "docker" against <nil>
	I1202 20:49:44.369913  263448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:49:44.426035  263448 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-02 20:49:44.417646261 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 20:49:44.426189  263448 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 20:49:44.426447  263448 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1202 20:49:44.426598  263448 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 20:49:44.429725  263448 out.go:171] Using Docker driver with root privileges
	I1202 20:49:44.432484  263448 cni.go:84] Creating CNI manager for ""
	I1202 20:49:44.432551  263448 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1202 20:49:44.432565  263448 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 20:49:44.432636  263448 start.go:353] cluster config:
	{Name:download-only-957967 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-957967 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:49:44.435589  263448 out.go:99] Starting "download-only-957967" primary control-plane node in "download-only-957967" cluster
	I1202 20:49:44.435610  263448 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1202 20:49:44.438382  263448 out.go:99] Pulling base image v0.0.48-1764169655-21974 ...
	I1202 20:49:44.438417  263448 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1202 20:49:44.438452  263448 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
	I1202 20:49:44.453878  263448 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b to local cache
	I1202 20:49:44.454014  263448 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory
	I1202 20:49:44.454034  263448 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local cache directory, skipping pull
	I1202 20:49:44.454039  263448 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in cache, skipping pull
	I1202 20:49:44.454050  263448 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b as a tarball
	I1202 20:49:44.500081  263448 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	I1202 20:49:44.500118  263448 cache.go:65] Caching tarball of preloaded images
	I1202 20:49:44.500281  263448 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime containerd
	I1202 20:49:44.503273  263448 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1202 20:49:44.503302  263448 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1202 20:49:44.597214  263448 preload.go:295] Got checksum from GCS API "cd1a05d5493c9270e248bf47fb3f071d"
	I1202 20:49:44.597267  263448 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:cd1a05d5493c9270e248bf47fb3f071d -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-957967 host does not exist
	  To start a cluster, run: "minikube start -p download-only-957967"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.09s)
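
The download lines above show how preloads are fetched: the expected digest comes from the GCS API, then rides along on the download URL as checksum=md5:<sum>. A sketch of the same verify-while-downloading pattern in Go, reusing the URL and digest from this run (the helper name is made up for illustration):

  package main

  import (
      "crypto/md5"
      "fmt"
      "io"
      "net/http"
      "os"
  )

  // downloadWithMD5 fetches url into dest and checks the body against
  // wantMD5 (hex), mirroring the checksum=md5:<sum> query seen above.
  func downloadWithMD5(url, wantMD5, dest string) error {
      resp, err := http.Get(url)
      if err != nil {
          return err
      }
      defer resp.Body.Close()

      f, err := os.Create(dest)
      if err != nil {
          return err
      }
      defer f.Close()

      h := md5.New()
      if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
          return err
      }
      if got := fmt.Sprintf("%x", h.Sum(nil)); got != wantMD5 {
          return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
      }
      return nil
  }

  func main() {
      // URL and digest taken from the log above; the helper is illustrative.
      url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-containerd-overlay2-arm64.tar.lz4"
      if err := downloadWithMD5(url, "cd1a05d5493c9270e248bf47fb3f071d", "preload.tar.lz4"); err != nil {
          fmt.Fprintln(os.Stderr, err)
      }
  }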

TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-957967
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0-beta.0/json-events (2.34s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-414085 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-414085 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (2.335900478s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (2.34s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
--- PASS: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
--- PASS: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-414085
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-414085: exit status 85 (89.037621ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                             ARGS                                                                                             │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-424785 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-424785 │ jenkins │ v1.37.0 │ 02 Dec 25 20:49 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 20:49 UTC │ 02 Dec 25 20:49 UTC │
	│ delete  │ -p download-only-424785                                                                                                                                                                      │ download-only-424785 │ jenkins │ v1.37.0 │ 02 Dec 25 20:49 UTC │ 02 Dec 25 20:49 UTC │
	│ start   │ -o=json --download-only -p download-only-957967 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd        │ download-only-957967 │ jenkins │ v1.37.0 │ 02 Dec 25 20:49 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 20:49 UTC │ 02 Dec 25 20:49 UTC │
	│ delete  │ -p download-only-957967                                                                                                                                                                      │ download-only-957967 │ jenkins │ v1.37.0 │ 02 Dec 25 20:49 UTC │ 02 Dec 25 20:49 UTC │
	│ start   │ -o=json --download-only -p download-only-414085 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-414085 │ jenkins │ v1.37.0 │ 02 Dec 25 20:49 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:49:52
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:49:52.835606  263643 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:49:52.835776  263643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:49:52.835787  263643 out.go:374] Setting ErrFile to fd 2...
	I1202 20:49:52.835793  263643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:49:52.836034  263643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 20:49:52.836432  263643 out.go:368] Setting JSON to true
	I1202 20:49:52.837231  263643 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9131,"bootTime":1764699462,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 20:49:52.837300  263643 start.go:143] virtualization:  
	I1202 20:49:52.840624  263643 out.go:99] [download-only-414085] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 20:49:52.840843  263643 notify.go:221] Checking for updates...
	I1202 20:49:52.843784  263643 out.go:171] MINIKUBE_LOCATION=21997
	I1202 20:49:52.846720  263643 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:49:52.849702  263643 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 20:49:52.852601  263643 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 20:49:52.855395  263643 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1202 20:49:52.861005  263643 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 20:49:52.861315  263643 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:49:52.881870  263643 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 20:49:52.881991  263643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:49:52.949279  263643 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-02 20:49:52.940094778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 20:49:52.949393  263643 docker.go:319] overlay module found
	I1202 20:49:52.952231  263643 out.go:99] Using the docker driver based on user configuration
	I1202 20:49:52.952268  263643 start.go:309] selected driver: docker
	I1202 20:49:52.952275  263643 start.go:927] validating driver "docker" against <nil>
	I1202 20:49:52.952391  263643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 20:49:53.014235  263643 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-02 20:49:53.003530384 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 20:49:53.014392  263643 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 20:49:53.014699  263643 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1202 20:49:53.014854  263643 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 20:49:53.017950  263643 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-414085 host does not exist
	  To start a cluster, run: "minikube start -p download-only-414085"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)
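
Each of the start runs above begins by probing the host with docker system info --format "{{json .}}" and decoding the result (the cli_runner/info.go lines). A small sketch of that probe; the struct keeps only a few of the fields, and assumes the JSON keys match the field names visible in the dump:

  package main

  import (
      "encoding/json"
      "fmt"
      "os/exec"
  )

  // dockerInfo holds a small subset of the fields seen in the log above.
  type dockerInfo struct {
      NCPU            int    `json:"NCPU"`
      MemTotal        int64  `json:"MemTotal"`
      ServerVersion   string `json:"ServerVersion"`
      OperatingSystem string `json:"OperatingSystem"`
  }

  func main() {
      out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
      if err != nil {
          fmt.Println("docker not available:", err)
          return
      }
      var info dockerInfo
      if err := json.Unmarshal(out, &info); err != nil {
          fmt.Println("decode:", err)
          return
      }
      fmt.Printf("docker %s on %s: %d CPUs, %d bytes RAM\n",
          info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
  }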

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.21s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-414085
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.68s)

=== RUN   TestBinaryMirror
I1202 20:49:56.569338  263241 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-900610 --alsologtostderr --binary-mirror http://127.0.0.1:38189 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-900610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-900610
--- PASS: TestBinaryMirror (0.68s)
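
The binary.go line above shows the mirror test verifying kubectl against its published .sha256 file rather than caching the binary. A sketch of such a checksum-verified fetch (the control flow is illustrative, not minikube's own downloader):

  package main

  import (
      "crypto/sha256"
      "fmt"
      "io"
      "net/http"
      "strings"
  )

  // fetch returns the body of url (no retries; illustrative only).
  func fetch(url string) ([]byte, error) {
      resp, err := http.Get(url)
      if err != nil {
          return nil, err
      }
      defer resp.Body.Close()
      return io.ReadAll(resp.Body)
  }

  func main() {
      base := "https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl"
      bin, err := fetch(base)
      if err != nil {
          panic(err)
      }
      sum, err := fetch(base + ".sha256")
      if err != nil {
          panic(err)
      }
      got := fmt.Sprintf("%x", sha256.Sum256(bin))
      want := strings.Fields(string(sum))[0] // the file holds the hex digest
      fmt.Println("checksum ok:", got == want)
  }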

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-409059
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-409059: exit status 85 (79.357811ms)

-- stdout --
	* Profile "addons-409059" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-409059"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)
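
The "(dbg) Non-zero exit ... exit status 85" assertions throughout this report work by running the minikube binary and inspecting the process exit code; per the output above, 85 is what this run returned when the targeted profile did not exist. Capturing such a code in Go:

  package main

  import (
      "errors"
      "fmt"
      "os/exec"
  )

  func main() {
      // Same invocation as the test above; paths assume the repo layout.
      cmd := exec.Command("out/minikube-linux-arm64", "addons", "enable", "dashboard", "-p", "addons-409059")
      out, err := cmd.CombinedOutput()
      var ee *exec.ExitError
      if errors.As(err, &ee) {
          // The tests assert on this code (85 here, when the profile is absent).
          fmt.Printf("exit status %d\n%s", ee.ExitCode(), out)
          return
      }
      if err != nil {
          fmt.Println("could not run:", err)
          return
      }
      fmt.Printf("success:\n%s", out)
  }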

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-409059
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-409059: exit status 85 (84.015501ms)

-- stdout --
	* Profile "addons-409059" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-409059"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (159.14s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-409059 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-409059 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m39.14455698s)
--- PASS: TestAddons/Setup (159.14s)

TestAddons/serial/Volcano (40.66s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 56.443024ms
addons_test.go:876: volcano-admission stabilized in 56.643026ms
addons_test.go:868: volcano-scheduler stabilized in 56.887335ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-bqgzk" [a3e2be1a-c09f-47f2-98bc-58aafedb0bc5] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003035484s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-76l5x" [580b7e4f-0ccb-4c97-a8a1-bfc724e74487] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00308551s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-zq4tg" [c45b81b7-63c5-4300-a0b1-d5a16d5b2b88] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003479717s
addons_test.go:903: (dbg) Run:  kubectl --context addons-409059 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-409059 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-409059 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [c42e5dcb-f2db-47d0-a64e-5120f6b47ea6] Pending
helpers_test.go:352: "test-job-nginx-0" [c42e5dcb-f2db-47d0-a64e-5120f6b47ea6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [c42e5dcb-f2db-47d0-a64e-5120f6b47ea6] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003516505s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-409059 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-409059 addons disable volcano --alsologtostderr -v=1: (11.962654361s)
--- PASS: TestAddons/serial/Volcano (40.66s)
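
The "waiting 6m0s for pods matching ..." steps above poll pods by label selector until they are healthy. A hedged sketch of the same loop with client-go; it checks only the Running phase, while the real helpers also inspect readiness conditions, and it assumes a kubeconfig at the default path:

  package main

  import (
      "context"
      "fmt"
      "time"

      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(cfg)
      if err != nil {
          panic(err)
      }
      ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
      defer cancel()
      for {
          // Namespace and selector taken from the Volcano block above.
          pods, err := cs.CoreV1().Pods("volcano-system").List(ctx,
              metav1.ListOptions{LabelSelector: "app=volcano-scheduler"})
          if err != nil {
              panic(err)
          }
          running := 0
          for _, p := range pods.Items {
              if p.Status.Phase == corev1.PodRunning {
                  running++
              }
          }
          if running > 0 && running == len(pods.Items) {
              fmt.Println("all matching pods are Running")
              return
          }
          select {
          case <-ctx.Done():
              panic("timed out waiting for pods")
          case <-time.After(2 * time.Second):
          }
      }
  }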

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-409059 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-409059 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (10.89s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-409059 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-409059 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [958db1e5-4f53-4ed8-ba2d-d9a25762d71b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [958db1e5-4f53-4ed8-ba2d-d9a25762d71b] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003888949s
addons_test.go:694: (dbg) Run:  kubectl --context addons-409059 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-409059 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-409059 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-409059 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.89s)
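
The fake-credentials checks above run printenv GOOGLE_APPLICATION_CREDENTIALS inside the pod and cat the mounted JSON file. The same verification done from inside a Go program (stdlib only; a sketch, not the gcp-auth webhook itself):

  package main

  import (
      "fmt"
      "os"
  )

  func main() {
      path := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS")
      if path == "" {
          fmt.Println("GOOGLE_APPLICATION_CREDENTIALS not injected")
          return
      }
      creds, err := os.ReadFile(path)
      if err != nil {
          fmt.Println("credentials file missing:", err)
          return
      }
      fmt.Printf("found %d bytes of credentials at %s\n", len(creds), path)
      fmt.Println("project:", os.Getenv("GOOGLE_CLOUD_PROJECT"))
  }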

TestAddons/parallel/Registry (15.77s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.010776ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-6j4kg" [e570ae69-0c72-4b10-9f55-cbd3c4cf2346] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004458489s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-zqpqs" [59405c2c-8e99-45a5-a412-0bd9d38eac31] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003235584s
addons_test.go:392: (dbg) Run:  kubectl --context addons-409059 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-409059 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-409059 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.756410968s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-409059 ip
2025/12/02 20:53:52 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-409059 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.77s)
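
The registry check above does a headers-only request (wget --spider) from inside the cluster, then a plain GET against http://192.168.49.2:5000 from the host (the DEBUG line). The equivalent host-side probe in Go; the address is specific to this run:

  package main

  import (
      "fmt"
      "net/http"
      "time"
  )

  func main() {
      client := &http.Client{Timeout: 5 * time.Second}
      // Address taken from the log above; reachable only from the test host.
      resp, err := client.Head("http://192.168.49.2:5000")
      if err != nil {
          fmt.Println("registry unreachable:", err)
          return
      }
      resp.Body.Close()
      fmt.Println("registry responded:", resp.Status)
  }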

TestAddons/parallel/RegistryCreds (0.95s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 9.884901ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-409059
addons_test.go:332: (dbg) Run:  kubectl --context addons-409059 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-409059 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.95s)

TestAddons/parallel/Ingress (19.88s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-409059 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-409059 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-409059 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [e6833191-26c1-4862-af61-f3884f5bc86f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [e6833191-26c1-4862-af61-f3884f5bc86f] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003310847s
I1202 20:55:12.359949  263241 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-409059 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-409059 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-409059 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-409059 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-409059 addons disable ingress-dns --alsologtostderr -v=1: (1.37269018s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-409059 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-409059 addons disable ingress --alsologtostderr -v=1: (7.836146446s)
--- PASS: TestAddons/parallel/Ingress (19.88s)
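
Both assertions above are easy to replay by hand once the ingress and ingress-dns addons are enabled (a sketch; nginx.example.com and hello-john.test come from the test manifests referenced above):

    # ingress: the controller answers on the node's port 80, routed by Host header
    minikube -p addons-409059 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # ingress-dns: ingress hostnames resolve when queried against the node IP
    nslookup hello-john.test "$(minikube -p addons-409059 ip)"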

TestAddons/parallel/InspektorGadget (11.94s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-kw4zw" [3218818f-4269-42ce-a07a-15553c1a287f] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004280033s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-409059 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-409059 addons disable inspektor-gadget --alsologtostderr -v=1: (5.930364081s)
--- PASS: TestAddons/parallel/InspektorGadget (11.94s)

TestAddons/parallel/MetricsServer (5.92s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.732081ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-jkszf" [d1a1f7f1-e406-4df9-9b6a-045ff4e54150] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004338769s
addons_test.go:463: (dbg) Run:  kubectl --context addons-409059 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-409059 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.92s)
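
Replaying this check manually is just the addon plus kubectl top; metrics need a scrape cycle or two before they appear (a sketch reusing this run's profile):

    minikube -p addons-409059 addons enable metrics-server
    kubectl --context addons-409059 top pods -n kube-system
    kubectl --context addons-409059 top nodes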

TestAddons/parallel/CSI (43.91s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1202 20:54:19.997722  263241 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1202 20:54:20.001440  263241 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1202 20:54:20.001464  263241 kapi.go:107] duration metric: took 6.812026ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.82257ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-409059 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-409059 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [e1694a39-ecc4-418a-9512-19167d3a0237] Pending
helpers_test.go:352: "task-pv-pod" [e1694a39-ecc4-418a-9512-19167d3a0237] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [e1694a39-ecc4-418a-9512-19167d3a0237] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004148316s
addons_test.go:572: (dbg) Run:  kubectl --context addons-409059 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-409059 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-409059 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-409059 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-409059 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-409059 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-409059 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [76f0857c-705c-4815-90e5-95fd6cb9e380] Pending
helpers_test.go:352: "task-pv-pod-restore" [76f0857c-705c-4815-90e5-95fd6cb9e380] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [76f0857c-705c-4815-90e5-95fd6cb9e380] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003243858s
addons_test.go:614: (dbg) Run:  kubectl --context addons-409059 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-409059 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-409059 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-409059 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-409059 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-409059 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.025623686s)
--- PASS: TestAddons/parallel/CSI (43.91s)
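
Condensed, the lifecycle this test exercises is the standard CSI snapshot/restore loop (a sketch; the manifests live under the minikube repo's test integration testdata tree, and --context flags are omitted for brevity):

    kubectl create -f testdata/csi-hostpath-driver/pvc.yaml            # claim "hpvc" on the hostpath driver
    kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml         # pod "task-pv-pod" writes into it
    kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml       # VolumeSnapshot "new-snapshot-demo"
    kubectl delete pod task-pv-pod && kubectl delete pvc hpvc
    kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # "hpvc-restore", sourced from the snapshot
    kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # restored pod comes up Running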

TestAddons/parallel/Headlamp (18.16s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-409059 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-409059 --alsologtostderr -v=1: (1.384200307s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-4dp55" [f98b11b2-5450-4e72-bd95-7b05c95828a0] Pending
helpers_test.go:352: "headlamp-dfcdc64b-4dp55" [f98b11b2-5450-4e72-bd95-7b05c95828a0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-4dp55" [f98b11b2-5450-4e72-bd95-7b05c95828a0] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003257293s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-409059 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-409059 addons disable headlamp --alsologtostderr -v=1: (5.769042221s)
--- PASS: TestAddons/parallel/Headlamp (18.16s)

TestAddons/parallel/CloudSpanner (6.93s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-lf49d" [b2520f63-19f9-4a89-8056-9940276fa1e1] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.002958915s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-409059 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.93s)

TestAddons/parallel/LocalPath (51.67s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-409059 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-409059 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-409059 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [be21c470-3d09-439a-bd69-8d3bd0c5dc9c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [be21c470-3d09-439a-bd69-8d3bd0c5dc9c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [be21c470-3d09-439a-bd69-8d3bd0c5dc9c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.002946116s
addons_test.go:967: (dbg) Run:  kubectl --context addons-409059 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-409059 ssh "cat /opt/local-path-provisioner/pvc-79f5195c-a520-4e0b-ad33-7a0d110dae77_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-409059 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-409059 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-409059 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-409059 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.247276281s)
--- PASS: TestAddons/parallel/LocalPath (51.67s)
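
The interesting detail is the ssh check: local-path-provisioner backs each claim with a host directory named pvc-<uid>_<namespace>_<claim>, as the path in the log shows. A by-hand version (a sketch; the exact directory name is generated per claim):

    kubectl --context addons-409059 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-409059 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # once the pod has written its file, it is visible on the node's filesystem
    minikube -p addons-409059 ssh "ls /opt/local-path-provisioner/"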

TestAddons/parallel/NvidiaDevicePlugin (5.6s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-9v7f7" [edc83180-ff88-46d2-801f-7cab1890f3b0] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.006083392s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-409059 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.60s)

TestAddons/parallel/Yakd (11.84s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-p2g94" [b01f94ed-e50d-4495-9228-7a3a93f17ade] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004059954s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-409059 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-409059 addons disable yakd --alsologtostderr -v=1: (5.835890598s)
--- PASS: TestAddons/parallel/Yakd (11.84s)

TestAddons/StoppedEnableDisable (12.31s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-409059
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-409059: (12.03635623s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-409059
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-409059
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-409059
--- PASS: TestAddons/StoppedEnableDisable (12.31s)
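
The point of this test is that addon toggles work against a stopped cluster, since they only mutate the profile's stored config (a sketch):

    minikube stop -p addons-409059
    minikube addons enable dashboard -p addons-409059     # no running apiserver needed
    minikube addons disable dashboard -p addons-409059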

TestCertOptions (36.91s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-309892 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-309892 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (33.717313169s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-309892 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-309892 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-309892 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-309892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-309892
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-309892: (2.446414114s)
--- PASS: TestCertOptions (36.91s)
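
To verify by hand that the extra IPs and names reached the apiserver certificate as SANs (a sketch; the grep filter is illustrative):

    minikube start -p cert-options-309892 --memory=3072 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=docker --container-runtime=containerd
    minikube -p cert-options-309892 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"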

TestCertExpiration (219.99s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-859548 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-859548 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (30.883101645s)
E1202 22:07:36.513797  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:07:50.415984  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-859548 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
E1202 22:09:44.123059  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-859548 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.761674582s)
helpers_test.go:175: Cleaning up "cert-expiration-859548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-859548
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-859548: (2.339629444s)
--- PASS: TestCertExpiration (219.99s)
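
The test's trick is the roughly three-minute pause between the two starts: the first run mints certificates with a 3m TTL, and the second start, issued after they have expired, has to regenerate them (a sketch):

    minikube start -p cert-expiration-859548 --memory=3072 --cert-expiration=3m \
      --driver=docker --container-runtime=containerd
    sleep 200   # let the 3m certificates expire
    minikube start -p cert-expiration-859548 --memory=3072 --cert-expiration=8760h \
      --driver=docker --container-runtime=containerd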

TestForceSystemdFlag (36.78s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-090918 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1202 22:04:44.123498  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-090918 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.370706929s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-090918 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-090918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-090918
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-090918: (2.093785055s)
--- PASS: TestForceSystemdFlag (36.78s)
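
The cat of /etc/containerd/config.toml is checking the cgroup driver; with --force-systemd the runc runtime options should carry SystemdCgroup = true (my reading of the assertion, hedged; the grep below is illustrative):

    minikube start -p force-systemd-flag-090918 --memory=3072 --force-systemd \
      --driver=docker --container-runtime=containerd
    minikube -p force-systemd-flag-090918 ssh "cat /etc/containerd/config.toml" \
      | grep SystemdCgroup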

TestForceSystemdEnv (39.39s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-573431 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1202 22:05:53.487989  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-573431 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.988255386s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-573431 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-573431" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-573431
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-573431: (2.074316755s)
--- PASS: TestForceSystemdEnv (39.39s)

TestDockerEnvContainerd (45.26s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-414427 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-414427 --driver=docker  --container-runtime=containerd: (29.402277848s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-414427"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-414427": (1.076862808s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-z8wPZAw1cXON/agent.282550" SSH_AGENT_PID="282551" DOCKER_HOST=ssh://docker@127.0.0.1:33093 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-z8wPZAw1cXON/agent.282550" SSH_AGENT_PID="282551" DOCKER_HOST=ssh://docker@127.0.0.1:33093 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-z8wPZAw1cXON/agent.282550" SSH_AGENT_PID="282551" DOCKER_HOST=ssh://docker@127.0.0.1:33093 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.239303099s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-z8wPZAw1cXON/agent.282550" SSH_AGENT_PID="282551" DOCKER_HOST=ssh://docker@127.0.0.1:33093 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-414427" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-414427
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-414427: (2.062476108s)
--- PASS: TestDockerEnvContainerd (45.26s)
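
docker-env with --ssh-host --ssh-add prints shell exports (DOCKER_HOST=ssh://..., plus an ssh-agent keyed for the node), so a local docker CLI talks to the engine inside the minikube container. Interactive use is roughly (a sketch):

    eval "$(minikube -p dockerenv-414427 docker-env --ssh-host --ssh-add)"
    docker version    # server section now reports the engine inside the node
    docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls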

TestErrorSpam/setup (32.06s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-179399 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-179399 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-179399 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-179399 --driver=docker  --container-runtime=containerd: (32.06029569s)
--- PASS: TestErrorSpam/setup (32.06s)

TestErrorSpam/start (0.79s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-179399 --log_dir /tmp/nospam-179399 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-179399 --log_dir /tmp/nospam-179399 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-179399 --log_dir /tmp/nospam-179399 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

TestErrorSpam/status (1.49s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-179399 --log_dir /tmp/nospam-179399 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-179399 --log_dir /tmp/nospam-179399 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-179399 --log_dir /tmp/nospam-179399 status
--- PASS: TestErrorSpam/status (1.49s)

TestErrorSpam/pause (1.94s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-179399 --log_dir /tmp/nospam-179399 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-179399 --log_dir /tmp/nospam-179399 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-179399 --log_dir /tmp/nospam-179399 pause
--- PASS: TestErrorSpam/pause (1.94s)

TestErrorSpam/unpause (1.81s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-179399 --log_dir /tmp/nospam-179399 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-179399 --log_dir /tmp/nospam-179399 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-179399 --log_dir /tmp/nospam-179399 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

TestErrorSpam/stop (1.68s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-179399 --log_dir /tmp/nospam-179399 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-179399 --log_dir /tmp/nospam-179399 stop: (1.472993365s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-179399 --log_dir /tmp/nospam-179399 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-179399 --log_dir /tmp/nospam-179399 stop
--- PASS: TestErrorSpam/stop (1.68s)

TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)
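
CopySyncFile relies on minikube's file sync: anything placed under ~/.minikube/files/ is copied into the node at the mirrored absolute path on start, which is how the nested hosts file above ends up inside the VM. A by-hand version (a sketch with hypothetical file contents):

    mkdir -p ~/.minikube/files/etc/test/nested/copy
    echo "synced" > ~/.minikube/files/etc/test/nested/copy/hosts
    minikube start -p functional-446665
    minikube -p functional-446665 ssh "cat /etc/test/nested/copy/hosts"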

TestFunctional/serial/StartWithProxy (78.63s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-446665 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1202 20:57:36.520502  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:36.526922  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:36.538273  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:36.559617  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:36.600986  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:36.682327  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:36.843803  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:37.165398  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:37.806832  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:39.088353  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:41.649780  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:46.771735  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:57:57.014059  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:58:17.495663  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-446665 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m18.626073675s)
--- PASS: TestFunctional/serial/StartWithProxy (78.63s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.87s)

=== RUN   TestFunctional/serial/SoftStart
I1202 20:58:29.440329  263241 config.go:182] Loaded profile config "functional-446665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-446665 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-446665 --alsologtostderr -v=8: (6.865454015s)
functional_test.go:678: soft start took 6.866858087s for "functional-446665" cluster.
I1202 20:58:36.306125  263241 config.go:182] Loaded profile config "functional-446665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (6.87s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-446665 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-446665 cache add registry.k8s.io/pause:3.1: (1.346206557s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-446665 cache add registry.k8s.io/pause:3.3: (1.162841937s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-446665 cache add registry.k8s.io/pause:latest: (1.022173344s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.53s)
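
cache add pulls each image on the host, stores it in the minikube cache, and loads it into the node's runtime, so it works even when the node has no registry access (a sketch mirroring the commands above):

    minikube -p functional-446665 cache add registry.k8s.io/pause:3.1
    minikube cache list                                     # cached images are tracked globally
    minikube -p functional-446665 ssh sudo crictl images    # and present in the node's containerd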

TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-446665 /tmp/TestFunctionalserialCacheCmdcacheadd_local1847747563/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 cache add minikube-local-cache-test:functional-446665
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 cache delete minikube-local-cache-test:functional-446665
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-446665
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-446665 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (411.284289ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present

-- /stdout --
** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.00s)
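
The reload sequence shows the cache acting as the source of truth: delete the image inside the node, confirm it is gone, then restore it from the host-side cache (a sketch):

    minikube -p functional-446665 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-446665 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: not present
    minikube -p functional-446665 cache reload
    minikube -p functional-446665 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again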

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 kubectl -- --context functional-446665 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-446665 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (50.42s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-446665 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1202 20:58:58.457698  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-446665 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (50.419929629s)
functional_test.go:776: restart took 50.420037277s for "functional-446665" cluster.
I1202 20:59:34.567921  263241 config.go:182] Loaded profile config "functional-446665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (50.42s)
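
--extra-config takes component.flag=value pairs and is re-applied when the cluster restarts; here it adds an admission plugin to the apiserver. One way to confirm the flag landed (a sketch; the label selector and grep are illustrative):

    minikube start -p functional-446665 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-446665 -n kube-system get pod \
      -l component=kube-apiserver -o yaml | grep enable-admission-plugins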

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-446665 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
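
The health check behind those lines is a single label query over the static control-plane pods; a compact equivalent (a sketch):

    kubectl --context functional-446665 -n kube-system get po -l tier=control-plane \
      -o custom-columns=NAME:.metadata.name,PHASE:.status.phase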

TestFunctional/serial/LogsCmd (1.47s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-446665 logs: (1.466873802s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

TestFunctional/serial/LogsFileCmd (1.51s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 logs --file /tmp/TestFunctionalserialLogsFileCmd2120002813/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-446665 logs --file /tmp/TestFunctionalserialLogsFileCmd2120002813/001/logs.txt: (1.504307981s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

TestFunctional/serial/InvalidService (4.48s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-446665 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-446665
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-446665: exit status 115 (394.354507ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32326 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-446665 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.48s)

TestFunctional/parallel/ConfigCmd (0.57s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-446665 config get cpus: exit status 14 (163.087757ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-446665 config get cpus: exit status 14 (70.743253ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.57s)
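
The exit-14 cases are the expected behavior for config get on an unset key, which is what makes unset verifiable (a sketch):

    minikube -p functional-446665 config set cpus 2
    minikube -p functional-446665 config get cpus      # prints 2
    minikube -p functional-446665 config unset cpus
    minikube -p functional-446665 config get cpus      # exit status 14: key not found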

TestFunctional/parallel/DashboardCmd (6.7s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-446665 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-446665 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 297868: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.70s)

TestFunctional/parallel/DryRun (0.45s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-446665 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-446665 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (199.710914ms)

                                                
                                                
-- stdout --
	* [functional-446665] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 21:00:15.294758  297574 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:00:15.294911  297574 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:00:15.294931  297574 out.go:374] Setting ErrFile to fd 2...
	I1202 21:00:15.294942  297574 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:00:15.295241  297574 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:00:15.295645  297574 out.go:368] Setting JSON to false
	I1202 21:00:15.296655  297574 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9754,"bootTime":1764699462,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 21:00:15.296732  297574 start.go:143] virtualization:  
	I1202 21:00:15.300025  297574 out.go:179] * [functional-446665] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:00:15.303933  297574 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:00:15.304094  297574 notify.go:221] Checking for updates...
	I1202 21:00:15.310130  297574 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:00:15.312877  297574 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:00:15.315719  297574 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 21:00:15.318684  297574 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:00:15.321560  297574 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:00:15.324992  297574 config.go:182] Loaded profile config "functional-446665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 21:00:15.325618  297574 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:00:15.361397  297574 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:00:15.361511  297574 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:00:15.423868  297574 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-02 21:00:15.412325192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:00:15.423980  297574 docker.go:319] overlay module found
	I1202 21:00:15.427435  297574 out.go:179] * Using the docker driver based on existing profile
	I1202 21:00:15.430163  297574 start.go:309] selected driver: docker
	I1202 21:00:15.430189  297574 start.go:927] validating driver "docker" against &{Name:functional-446665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-446665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:00:15.430305  297574 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:00:15.433102  297574 out.go:203] 
	W1202 21:00:15.436742  297574 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1202 21:00:15.439683  297574 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-446665 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.45s)
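
Note: exit status 23 here is the assertion, not a problem: --dry-run still validates the requested resources, and 250MB is below minikube's usable minimum of 1800MB, so the run aborts with RSRC_INSUFFICIENT_REQ_MEMORY before touching the cluster. A sketch of the two invocations (generic "minikube" binary assumed):

	$ minikube start -p functional-446665 --dry-run --memory 250MB --driver=docker --container-runtime=containerd    # exit 23
	$ minikube start -p functional-446665 --dry-run --driver=docker --container-runtime=containerd                   # validates cleanly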

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-446665 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-446665 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (243.131165ms)

                                                
                                                
-- stdout --
	* [functional-446665] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 21:00:15.090870  297526 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:00:15.091082  297526 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:00:15.091109  297526 out.go:374] Setting ErrFile to fd 2...
	I1202 21:00:15.091131  297526 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:00:15.092232  297526 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:00:15.092734  297526 out.go:368] Setting JSON to false
	I1202 21:00:15.093802  297526 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9754,"bootTime":1764699462,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 21:00:15.093906  297526 start.go:143] virtualization:  
	I1202 21:00:15.097692  297526 out.go:179] * [functional-446665] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1202 21:00:15.101541  297526 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:00:15.101618  297526 notify.go:221] Checking for updates...
	I1202 21:00:15.107908  297526 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:00:15.110892  297526 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:00:15.114232  297526 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 21:00:15.118270  297526 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:00:15.122329  297526 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:00:15.126271  297526 config.go:182] Loaded profile config "functional-446665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 21:00:15.126966  297526 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:00:15.159903  297526 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:00:15.160027  297526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:00:15.224984  297526 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-02 21:00:15.215811785 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:00:15.225090  297526 docker.go:319] overlay module found
	I1202 21:00:15.228188  297526 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1202 21:00:15.230864  297526 start.go:309] selected driver: docker
	I1202 21:00:15.230885  297526 start.go:927] validating driver "docker" against &{Name:functional-446665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-446665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:00:15.230981  297526 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:00:15.234402  297526 out.go:203] 
	W1202 21:00:15.237367  297526 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1202 21:00:15.240296  297526 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)
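
Note: this repeats the DryRun memory check with localized output; the French messages above are the thing under test and are therefore reproduced verbatim. The log does not show how the locale was selected; assuming the standard locale environment variables, a sketch would be:

	$ LC_ALL=fr minikube start -p functional-446665 --dry-run --memory 250MB --driver=docker --container-runtime=containerd    # LC_ALL=fr is an assumption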

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.23s)
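
Note: the three invocations cover the default, Go-template, and JSON output modes of "minikube status". The template keys below are copied as-is from the test's format string (including its "kublet" spelling):

	$ minikube -p functional-446665 status
	$ minikube -p functional-446665 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
	$ minikube -p functional-446665 status -o json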

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-446665 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-446665 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-mxddf" [e6d99cd2-a744-4cfc-b7c3-67e558db6cbe] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-mxddf" [e6d99cd2-a744-4cfc-b7c3-67e558db6cbe] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.008423521s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 service hello-node-connect --url
functional_test.go:1654: (dbg) Done: out/minikube-linux-arm64 -p functional-446665 service hello-node-connect --url: (1.246262552s)
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32732
functional_test.go:1680: http://192.168.49.2:32732: success! body:
Request served by hello-node-connect-7d85dfc575-mxddf

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:32732
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.53s)
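
Note: the test deploys kicbase/echo-server, exposes it as a NodePort service, resolves the URL through minikube, and asserts on the echoed request shown above. The equivalent manual sequence (curl is an assumed stand-in for the test's HTTP client):

	$ kubectl --context functional-446665 create deployment hello-node-connect --image kicbase/echo-server
	$ kubectl --context functional-446665 expose deployment hello-node-connect --type=NodePort --port=8080
	$ minikube -p functional-446665 service hello-node-connect --url
	http://192.168.49.2:32732
	$ curl http://192.168.49.2:32732/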

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (25.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [5dabb04c-71e0-49fb-ab6e-6d17e3b580ce] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004154201s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-446665 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-446665 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-446665 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-446665 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8a9d2581-2e46-4039-a8b4-42ee4ac17e53] Pending
helpers_test.go:352: "sp-pod" [8a9d2581-2e46-4039-a8b4-42ee4ac17e53] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [8a9d2581-2e46-4039-a8b4-42ee4ac17e53] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.00367682s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-446665 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-446665 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-446665 delete -f testdata/storage-provisioner/pod.yaml: (1.903573146s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-446665 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [7e0bd230-91e3-491e-9b8d-56b13231a614] Pending
helpers_test.go:352: "sp-pod" [7e0bd230-91e3-491e-9b8d-56b13231a614] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [7e0bd230-91e3-491e-9b8d-56b13231a614] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003442563s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-446665 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.43s)
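
Note: the persistence check is the interesting part: a file is written through the first sp-pod, that pod is deleted, a second pod is scheduled against the same claim, and the file is listed again, so the PASS shows the dynamically provisioned volume outlived its pod. In outline (manifest paths as used by the test):

	$ kubectl --context functional-446665 apply -f testdata/storage-provisioner/pvc.yaml
	$ kubectl --context functional-446665 apply -f testdata/storage-provisioner/pod.yaml
	$ kubectl --context functional-446665 exec sp-pod -- touch /tmp/mount/foo
	$ kubectl --context functional-446665 delete -f testdata/storage-provisioner/pod.yaml
	$ kubectl --context functional-446665 apply -f testdata/storage-provisioner/pod.yaml
	$ kubectl --context functional-446665 exec sp-pod -- ls /tmp/mount    # foo survives the pod swap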

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh -n functional-446665 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 cp functional-446665:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3747902725/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh -n functional-446665 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh -n functional-446665 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.60s)
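
Note: "minikube cp" is exercised host-to-guest, guest-to-host, and into a guest path that does not exist yet, with every copy verified by reading the file back over SSH. For example (/tmp/out stands in for the test's generated temp directory):

	$ minikube -p functional-446665 cp testdata/cp-test.txt /home/docker/cp-test.txt
	$ minikube -p functional-446665 ssh -n functional-446665 "sudo cat /home/docker/cp-test.txt"
	$ minikube -p functional-446665 cp functional-446665:/home/docker/cp-test.txt /tmp/out/cp-test.txt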

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/263241/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh "sudo cat /etc/test/nested/copy/263241/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/263241.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh "sudo cat /etc/ssl/certs/263241.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/263241.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh "sudo cat /usr/share/ca-certificates/263241.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2632412.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh "sudo cat /etc/ssl/certs/2632412.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2632412.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh "sudo cat /usr/share/ca-certificates/2632412.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.24s)
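
Note: the six reads confirm that a host certificate named after the test run (263241.pem) is synced into the guest in both /etc/ssl/certs and /usr/share/ca-certificates, together with what look like OpenSSL subject-hash entries (51391683.0, 3ec20f2e.0); that reading of the hashed filenames is an inference, not stated in the log. Each check is a plain read over SSH:

	$ minikube -p functional-446665 ssh "sudo cat /etc/ssl/certs/263241.pem"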

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-446665 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-446665 ssh "sudo systemctl is-active docker": exit status 1 (375.754436ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-446665 ssh "sudo systemctl is-active crio": exit status 1 (698.228943ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (1.07s)
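
Note: with containerd as the active runtime, docker and crio must be inactive, so the non-zero exits above are the passing condition. "systemctl is-active" exits 3 for an inactive unit; that status surfaces as "ssh: Process exited with status 3" while the command still prints "inactive":

	$ minikube -p functional-446665 ssh "sudo systemctl is-active docker"    # prints "inactive", exits non-zero
	$ minikube -p functional-446665 ssh "sudo systemctl is-active crio"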

                                                
                                    
x
+
TestFunctional/parallel/License (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-446665 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-446665 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-446665 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-446665 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 294856: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-446665 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-446665 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [c50a8d02-de30-42f2-8b0c-3d8ee7b1ff4d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [c50a8d02-de30-42f2-8b0c-3d8ee7b1ff4d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004106243s
I1202 20:59:52.582663  263241 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.46s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-446665 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.73.211 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
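
Note: AccessDirect is near-instant because the preceding serial steps did the work: the tunnel process gives nginx-svc a reachable LoadBalancer ingress IP, which this step simply probes. Condensed (curl is an assumed stand-in for the test's reachability check):

	$ minikube -p functional-446665 tunnel --alsologtostderr &
	$ kubectl --context functional-446665 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
	10.111.73.211
	$ curl http://10.111.73.211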

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-446665 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (9.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-446665 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-446665 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-l779l" [14024ca3-aa06-42c4-a0bd-7fff70fd38ec] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-l779l" [14024ca3-aa06-42c4-a0bd-7fff70fd38ec] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003869427s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.65s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "377.761052ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "65.305514ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "443.411417ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "73.269796ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-446665 /tmp/TestFunctionalparallelMountCmdany-port4094932803/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764709211565736217" to /tmp/TestFunctionalparallelMountCmdany-port4094932803/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764709211565736217" to /tmp/TestFunctionalparallelMountCmdany-port4094932803/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764709211565736217" to /tmp/TestFunctionalparallelMountCmdany-port4094932803/001/test-1764709211565736217
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-446665 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (474.765634ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 21:00:12.043251  263241 retry.go:31] will retry after 601.98064ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  2 21:00 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  2 21:00 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  2 21:00 test-1764709211565736217
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh cat /mount-9p/test-1764709211565736217
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-446665 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [17315bf9-65a1-48c3-a078-4c726d7442d7] Pending
helpers_test.go:352: "busybox-mount" [17315bf9-65a1-48c3-a078-4c726d7442d7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [17315bf9-65a1-48c3-a078-4c726d7442d7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [17315bf9-65a1-48c3-a078-4c726d7442d7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005519535s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-446665 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-446665 /tmp/TestFunctionalparallelMountCmdany-port4094932803/001:/mount-9p --alsologtostderr -v=1] ...
E1202 21:00:20.379735  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.85s)
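
Note: the first findmnt failure is only a race while the 9p server comes up; the test retries until the mount is visible, then has the busybox-mount pod read and delete files through the shared directory. The core steps (/tmp/hostdir stands in for the generated temp directory):

	$ minikube mount -p functional-446665 /tmp/hostdir:/mount-9p &
	$ minikube -p functional-446665 ssh "findmnt -T /mount-9p | grep 9p"    # retried until the mount appears
	$ minikube -p functional-446665 ssh -- ls -la /mount-9p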

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 service list -o json
functional_test.go:1504: Took "623.593958ms" to run "out/minikube-linux-arm64 -p functional-446665 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31713
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31713
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.46s)
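
Note: the HTTPS, Format, and URL subtests above all resolve the same NodePort (31713) in different shapes: --https --url yields the https:// endpoint, --format={{.IP}} extracts just the node IP, and --url yields the plain http:// endpoint:

	$ minikube -p functional-446665 service --namespace=default --https --url hello-node
	$ minikube -p functional-446665 service hello-node --url --format={{.IP}}
	$ minikube -p functional-446665 service hello-node --url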

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-446665 /tmp/TestFunctionalparallelMountCmdspecific-port2765902603/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-446665 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (564.878168ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 21:00:20.974997  263241 retry.go:31] will retry after 513.09209ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh -- ls -la /mount-9p
2025/12/02 21:00:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-446665 /tmp/TestFunctionalparallelMountCmdspecific-port2765902603/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-446665 ssh "sudo umount -f /mount-9p": exit status 1 (298.695733ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-446665 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-446665 /tmp/TestFunctionalparallelMountCmdspecific-port2765902603/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.44s)
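
Note: --port 46464 pins the 9p server to a fixed host port. No files are written in this variant, hence the empty listing, and the follow-up "umount -f" exits 32 ("not mounted", util-linux's mount-failure code) because the stop handler had already torn the mount down; the test tolerates that exit. Sketch:

	$ minikube mount -p functional-446665 /tmp/hostdir:/mount-9p --port 46464 &    # /tmp/hostdir is a placeholder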

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-446665 version -o=json --components: (1.367011014s)
--- PASS: TestFunctional/parallel/Version/components (1.37s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-446665 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3844381626/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-446665 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3844381626/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-446665 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3844381626/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-446665 ssh "findmnt -T" /mount1: exit status 1 (805.536598ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 21:00:23.666186  263241 retry.go:31] will retry after 338.674926ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-446665 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-446665 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3844381626/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-446665 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3844381626/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-446665 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3844381626/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.39s)
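The cleanup check above can be reproduced by hand. A minimal sketch, using the profile name from this run and a hypothetical host directory /tmp/data:

    # start a mount in the background, confirm it is visible in the guest
    minikube mount -p functional-446665 /tmp/data:/mount1 &
    minikube -p functional-446665 ssh "findmnt -T /mount1"
    # kill every mount process for the profile, then confirm the mount is gone
    minikube mount -p functional-446665 --kill=true
    minikube -p functional-446665 ssh "findmnt -T /mount1"   # expected to exit non-zero now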
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-446665 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-446665
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-446665
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-446665 image ls --format short --alsologtostderr:
I1202 21:00:30.774249  300639 out.go:360] Setting OutFile to fd 1 ...
I1202 21:00:30.774384  300639 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:00:30.774396  300639 out.go:374] Setting ErrFile to fd 2...
I1202 21:00:30.774403  300639 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:00:30.774697  300639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
I1202 21:00:30.775483  300639 config.go:182] Loaded profile config "functional-446665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 21:00:30.775648  300639 config.go:182] Loaded profile config "functional-446665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 21:00:30.776216  300639 cli_runner.go:164] Run: docker container inspect functional-446665 --format={{.State.Status}}
I1202 21:00:30.795623  300639 ssh_runner.go:195] Run: systemctl --version
I1202 21:00:30.795685  300639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-446665
I1202 21:00:30.815111  300639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-446665/id_rsa Username:docker}
I1202 21:00:30.925186  300639 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
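The short format prints one image reference per line, which is convenient to grep; the JSON format exercised below is better for scripting. A sketch that checks for a specific tag, assuming jq is available on the host:

    # exits 0 if the tag is present in the node's containerd image store, non-zero otherwise
    minikube -p functional-446665 image ls --format json \
      | jq -e '.[].repoTags[]? | select(. == "registry.k8s.io/pause:3.10.1")' >/dev/null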
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-446665 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager     │ v1.34.2            │ sha256:1b3491 │ 20.7MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.2            │ sha256:4f982e │ 15.8MB │
│ docker.io/library/nginx                     │ alpine             │ sha256:cbad63 │ 23.1MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.2            │ sha256:94bff1 │ 22.8MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/nginx                     │ latest             │ sha256:bb747c │ 58.3MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.2            │ sha256:b178af │ 24.6MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ docker.io/kicbase/echo-server               │ functional-446665  │ sha256:ce2d2c │ 2.17MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/library/minikube-local-cache-test │ functional-446665  │ sha256:6a4d71 │ 991B   │
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:2c5f0d │ 21.1MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-446665 image ls --format table --alsologtostderr:
I1202 21:00:31.049542  300716 out.go:360] Setting OutFile to fd 1 ...
I1202 21:00:31.049750  300716 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:00:31.049759  300716 out.go:374] Setting ErrFile to fd 2...
I1202 21:00:31.049764  300716 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:00:31.050045  300716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
I1202 21:00:31.050689  300716 config.go:182] Loaded profile config "functional-446665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 21:00:31.050817  300716 config.go:182] Loaded profile config "functional-446665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 21:00:31.051456  300716 cli_runner.go:164] Run: docker container inspect functional-446665 --format={{.State.Status}}
I1202 21:00:31.078137  300716 ssh_runner.go:195] Run: systemctl --version
I1202 21:00:31.078196  300716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-446665
I1202 21:00:31.110216  300716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-446665/id_rsa Username:docker}
I1202 21:00:31.227991  300716 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-446665 image ls --format json --alsologtostderr:
[{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-446665"],"size":"2173567"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b46108996
9449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21136588"},{"id":"sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-p
roxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"22802260"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"58263548"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.
io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"24559643"},{"id":"sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"20718696"},{"id":"sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"15775785"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:6a4d7114f1a3d4d0eb28a4f71082d140e55b9bf3c1bfc1edc182e1a4dd43b4b2","repoDigests":[],"repoTags":["docker.io/library/minikube
-local-cache-test:functional-446665"],"size":"991"},{"id":"sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"23117513"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-446665 image ls --format json --alsologtostderr:
I1202 21:00:31.035093  300711 out.go:360] Setting OutFile to fd 1 ...
I1202 21:00:31.035210  300711 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:00:31.035221  300711 out.go:374] Setting ErrFile to fd 2...
I1202 21:00:31.035227  300711 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:00:31.035586  300711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
I1202 21:00:31.036496  300711 config.go:182] Loaded profile config "functional-446665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 21:00:31.036638  300711 config.go:182] Loaded profile config "functional-446665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 21:00:31.037474  300711 cli_runner.go:164] Run: docker container inspect functional-446665 --format={{.State.Status}}
I1202 21:00:31.060008  300711 ssh_runner.go:195] Run: systemctl --version
I1202 21:00:31.060068  300711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-446665
I1202 21:00:31.079489  300711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-446665/id_rsa Username:docker}
I1202 21:00:31.192453  300711 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-446665 image ls --format yaml --alsologtostderr:
- id: sha256:b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "24559643"
- id: sha256:4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "15775785"
- id: sha256:6a4d7114f1a3d4d0eb28a4f71082d140e55b9bf3c1bfc1edc182e1a4dd43b4b2
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-446665
size: "991"
- id: sha256:94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "22802260"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "23117513"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-446665
size: "2173567"
- id: sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "58263548"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21136588"
- id: sha256:1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "20718696"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-446665 image ls --format yaml --alsologtostderr:
I1202 21:00:30.773628  300640 out.go:360] Setting OutFile to fd 1 ...
I1202 21:00:30.773874  300640 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:00:30.773904  300640 out.go:374] Setting ErrFile to fd 2...
I1202 21:00:30.773925  300640 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:00:30.774212  300640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
I1202 21:00:30.775040  300640 config.go:182] Loaded profile config "functional-446665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 21:00:30.775228  300640 config.go:182] Loaded profile config "functional-446665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 21:00:30.775855  300640 cli_runner.go:164] Run: docker container inspect functional-446665 --format={{.State.Status}}
I1202 21:00:30.793476  300640 ssh_runner.go:195] Run: systemctl --version
I1202 21:00:30.793526  300640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-446665
I1202 21:00:30.812861  300640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-446665/id_rsa Username:docker}
I1202 21:00:30.917030  300640 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-446665 ssh pgrep buildkitd: exit status 1 (288.636724ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 image build -t localhost/my-image:functional-446665 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-446665 image build -t localhost/my-image:functional-446665 testdata/build --alsologtostderr: (3.639344793s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-446665 image build -t localhost/my-image:functional-446665 testdata/build --alsologtostderr:
I1202 21:00:31.570663  300851 out.go:360] Setting OutFile to fd 1 ...
I1202 21:00:31.571434  300851 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:00:31.571476  300851 out.go:374] Setting ErrFile to fd 2...
I1202 21:00:31.571496  300851 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:00:31.571768  300851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
I1202 21:00:31.572425  300851 config.go:182] Loaded profile config "functional-446665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 21:00:31.574850  300851 config.go:182] Loaded profile config "functional-446665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
I1202 21:00:31.575537  300851 cli_runner.go:164] Run: docker container inspect functional-446665 --format={{.State.Status}}
I1202 21:00:31.594641  300851 ssh_runner.go:195] Run: systemctl --version
I1202 21:00:31.594694  300851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-446665
I1202 21:00:31.613055  300851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-446665/id_rsa Username:docker}
I1202 21:00:31.721547  300851 build_images.go:162] Building image from path: /tmp/build.1055058338.tar
I1202 21:00:31.721620  300851 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1202 21:00:31.729518  300851 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1055058338.tar
I1202 21:00:31.733160  300851 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1055058338.tar: stat -c "%s %y" /var/lib/minikube/build/build.1055058338.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1055058338.tar': No such file or directory
I1202 21:00:31.733233  300851 ssh_runner.go:362] scp /tmp/build.1055058338.tar --> /var/lib/minikube/build/build.1055058338.tar (3072 bytes)
I1202 21:00:31.751246  300851 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1055058338
I1202 21:00:31.759539  300851 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1055058338 -xf /var/lib/minikube/build/build.1055058338.tar
I1202 21:00:31.767657  300851 containerd.go:394] Building image: /var/lib/minikube/build/build.1055058338
I1202 21:00:31.767728  300851 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1055058338 --local dockerfile=/var/lib/minikube/build/build.1055058338 --output type=image,name=localhost/my-image:functional-446665
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s
#6 [2/3] RUN true
#6 DONE 0.7s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:96fc1701ffa0ee0250fdcca1cf1d4983bfe414b6dc62a49441557805a2430f50
#8 exporting manifest sha256:96fc1701ffa0ee0250fdcca1cf1d4983bfe414b6dc62a49441557805a2430f50 0.0s done
#8 exporting config sha256:cfd83f2a970f1679ed89ab6c9445e9dca808c2a0dfa4dd52dcb8aa9ed27ab68b 0.0s done
#8 naming to localhost/my-image:functional-446665 done
#8 DONE 0.2s
I1202 21:00:35.125479  300851 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1055058338 --local dockerfile=/var/lib/minikube/build/build.1055058338 --output type=image,name=localhost/my-image:functional-446665: (3.357721157s)
I1202 21:00:35.125554  300851 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1055058338
I1202 21:00:35.135498  300851 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1055058338.tar
I1202 21:00:35.144900  300851 build_images.go:218] Built localhost/my-image:functional-446665 from /tmp/build.1055058338.tar
I1202 21:00:35.144934  300851 build_images.go:134] succeeded building to: functional-446665
I1202 21:00:35.144940  300851 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.17s)
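Per the stderr above, `image build` copies the build context to the node as a tar and runs `sudo buildctl build` there with the dockerfile.v0 frontend, so nothing build-related needs to be installed on the host. A minimal sketch, assuming a context directory containing a Dockerfile:

    minikube -p functional-446665 image build -t localhost/my-image:functional-446665 ./testdata/build
    minikube -p functional-446665 image ls | grep my-image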
x
+
TestFunctional/parallel/ImageCommands/Setup (0.63s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-446665
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.63s)
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 image load --daemon kicbase/echo-server:functional-446665 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-446665 image load --daemon kicbase/echo-server:functional-446665 --alsologtostderr: (1.105192543s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.40s)
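`image load --daemon` copies an image from the host Docker daemon into the cluster's containerd store. The pull/tag/load sequence exercised by Setup and this test, roughly:

    docker pull kicbase/echo-server:1.0
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-446665
    minikube -p functional-446665 image load --daemon kicbase/echo-server:functional-446665
    minikube -p functional-446665 image ls | grep echo-server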
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 image load --daemon kicbase/echo-server:functional-446665 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.27s)
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-446665
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 image load --daemon kicbase/echo-server:functional-446665 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.51s)
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)
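All three variants run the same command: `update-context` rewrites the API server address for the profile's context in the active kubeconfig. A minimal sketch:

    minikube -p functional-446665 update-context
    kubectl config view --minify   # verify the server address for the current context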
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 image save kicbase/echo-server:functional-446665 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 image rm kicbase/echo-server:functional-446665 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.8s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.80s)
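ImageSaveToFile, ImageRemove, and ImageLoadFromFile together form a save/remove/restore round trip. A sketch with a hypothetical tar path:

    minikube -p functional-446665 image save kicbase/echo-server:functional-446665 /tmp/echo-server.tar
    minikube -p functional-446665 image rm kicbase/echo-server:functional-446665
    minikube -p functional-446665 image load /tmp/echo-server.tar
    minikube -p functional-446665 image ls | grep echo-server   # the tag is back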
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-446665
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-446665 image save --daemon kicbase/echo-server:functional-446665 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-446665
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)
x
+
TestFunctional/delete_echo-server_images (0.05s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-446665
--- PASS: TestFunctional/delete_echo-server_images (0.05s)
x
+
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-446665
--- PASS: TestFunctional/delete_my-image_image (0.02s)
x
+
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-446665
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.06s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.38s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-753958 cache add registry.k8s.io/pause:3.1: (1.177610472s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-753958 cache add registry.k8s.io/pause:3.3: (1.173611706s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-753958 cache add registry.k8s.io/pause:latest: (1.02533986s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.38s)
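`cache add` pulls an image on the host, stores it under the minikube home directory, and loads it into the node, so it survives cluster recreation. A sketch using the profile from this run:

    minikube -p functional-753958 cache add registry.k8s.io/pause:3.1
    minikube cache list   # the cache is global, not per-profile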
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3331894008/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 cache add minikube-local-cache-test:functional-753958
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 cache delete minikube-local-cache-test:functional-753958
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-753958
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.02s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.81s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-753958 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (302.624447ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.81s)
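The reload path above can be reproduced directly: delete the image inside the node, observe the lookup failure, then let `cache reload` restore it from the host-side cache:

    minikube -p functional-753958 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-753958 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
    minikube -p functional-753958 cache reload
    minikube -p functional-753958 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again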
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.14s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.14s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.95s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.95s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.99s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1063638298/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.99s)
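`logs --file` writes the same output `minikube logs` prints to stdout into a file instead, which is handy for attaching to bug reports. A one-line sketch with a hypothetical path:

    minikube -p functional-753958 logs --file /tmp/functional-753958-logs.txt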
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.44s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-753958 config get cpus: exit status 14 (65.245017ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-753958 config get cpus: exit status 14 (70.725557ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.44s)
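Exit status 14 with "specified key could not be found in config" is the expected result of `config get` on an unset key, so both failures above are deliberate. The set/get/unset cycle the test walks through:

    minikube -p functional-753958 config set cpus 2
    minikube -p functional-753958 config get cpus     # prints 2
    minikube -p functional-753958 config unset cpus
    minikube -p functional-753958 config get cpus     # exits 14: key not in config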
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.44s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-753958 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-753958 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (185.817965ms)
-- stdout --
	* [functional-753958] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1202 21:29:58.870319  330699 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:29:58.870505  330699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:29:58.870532  330699 out.go:374] Setting ErrFile to fd 2...
	I1202 21:29:58.870552  330699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:29:58.870927  330699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:29:58.871393  330699 out.go:368] Setting JSON to false
	I1202 21:29:58.872768  330699 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":11537,"bootTime":1764699462,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 21:29:58.872863  330699 start.go:143] virtualization:  
	I1202 21:29:58.876228  330699 out.go:179] * [functional-753958] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 21:29:58.879045  330699 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:29:58.879112  330699 notify.go:221] Checking for updates...
	I1202 21:29:58.884796  330699 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:29:58.887702  330699 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:29:58.890470  330699 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 21:29:58.893315  330699 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:29:58.896218  330699 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:29:58.899561  330699 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:29:58.900190  330699 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:29:58.935987  330699 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:29:58.936119  330699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:29:58.992520  330699 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:29:58.983523366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:29:58.992617  330699 docker.go:319] overlay module found
	I1202 21:29:58.995714  330699 out.go:179] * Using the docker driver based on existing profile
	I1202 21:29:58.998440  330699 start.go:309] selected driver: docker
	I1202 21:29:58.998460  330699 start.go:927] validating driver "docker" against &{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:29:58.998560  330699 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:29:59.002085  330699 out.go:203] 
	W1202 21:29:59.005197  330699 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1202 21:29:59.008137  330699 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-753958 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.44s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-753958 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-753958 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 23 (183.742661ms)

                                                
                                                
-- stdout --
	* [functional-753958] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 21:29:58.692995  330651 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:29:58.693175  330651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:29:58.693206  330651 out.go:374] Setting ErrFile to fd 2...
	I1202 21:29:58.693227  330651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:29:58.693594  330651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:29:58.694047  330651 out.go:368] Setting JSON to false
	I1202 21:29:58.694897  330651 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":11537,"bootTime":1764699462,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 21:29:58.694983  330651 start.go:143] virtualization:  
	I1202 21:29:58.698529  330651 out.go:179] * [functional-753958] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1202 21:29:58.702229  330651 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 21:29:58.702321  330651 notify.go:221] Checking for updates...
	I1202 21:29:58.708219  330651 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 21:29:58.711031  330651 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 21:29:58.713823  330651 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 21:29:58.716516  330651 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 21:29:58.719388  330651 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 21:29:58.722649  330651 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 21:29:58.723252  330651 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 21:29:58.751143  330651 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 21:29:58.751259  330651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:29:58.806630  330651 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 21:29:58.797891331 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:29:58.806733  330651 docker.go:319] overlay module found
	I1202 21:29:58.809780  330651 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1202 21:29:58.812646  330651 start.go:309] selected driver: docker
	I1202 21:29:58.812667  330651 start.go:927] validating driver "docker" against &{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 21:29:58.812769  330651 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 21:29:58.816318  330651 out.go:203] 
	W1202 21:29:58.819196  330651 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1202 21:29:58.821977  330651 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.18s)
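
The French output above is the point of this test: minikube localizes its messages from the host locale. A minimal manual reproduction, assuming minikube picks the language up from the standard locale variables (LC_ALL/LANG), which this run's behavior suggests:

# Hypothetical repro; the locale mechanism is an assumption, the flags come from the log above.
LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-753958 --dry-run --memory 250MB
echo $?   # expected 23, the RSRC_INSUFFICIENT_REQ_MEMORY exit status seen above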

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)
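
Both runs above only assert that the listing succeeds. A sketch for inspecting the JSON form, assuming jq is available (the addon schema itself is not shown in this log, so 'keys' is used as a schema-agnostic probe):

out/minikube-linux-arm64 -p functional-753958 addons list -o json | jq 'keys'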

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.71s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.71s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh -n functional-753958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 cp functional-753958:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2827847343/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh -n functional-753958 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh -n functional-753958 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.15s)
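
The three cp runs above cover host-to-node, node-to-host, and copying into a not-yet-existing directory. A condensed sketch of the two directions (file names hypothetical):

out/minikube-linux-arm64 -p functional-753958 cp ./example.txt functional-753958:/home/docker/example.txt
out/minikube-linux-arm64 -p functional-753958 cp functional-753958:/home/docker/example.txt /tmp/example-copy.txt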

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/263241/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh "sudo cat /etc/test/nested/copy/263241/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.28s)
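
A sketch of how such a file plausibly reaches the node, assuming minikube's usual file-sync convention that anything under $MINIKUBE_HOME/files/<path> is copied to /<path> in the node (the 263241 path component matches this run's test process ID):

# Assumed sync mechanism: $MINIKUBE_HOME/files/<path> -> /<path> inside the node.
mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/263241"
echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/263241/hosts"
out/minikube-linux-arm64 -p functional-753958 ssh "sudo cat /etc/test/nested/copy/263241/hosts"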

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/263241.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh "sudo cat /etc/ssl/certs/263241.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/263241.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh "sudo cat /usr/share/ca-certificates/263241.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2632412.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh "sudo cat /etc/ssl/certs/2632412.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2632412.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh "sudo cat /usr/share/ca-certificates/2632412.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.76s)
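
The hash-style names checked above (51391683.0, 3ec20f2e.0) look like OpenSSL subject hashes of the synced certificates. A sketch for computing one, assuming the source certs live under $MINIKUBE_HOME/certs:

# -hash prints the subject hash used for /etc/ssl/certs/<hash>.0 entries (assumption: this
# is how the test derives the file names it probes).
openssl x509 -noout -hash -in "$MINIKUBE_HOME/certs/263241.pem"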

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.57s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-753958 ssh "sudo systemctl is-active docker": exit status 1 (282.169869ms)

                                                
                                                
-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-753958 ssh "sudo systemctl is-active crio": exit status 1 (288.665313ms)

                                                
                                                
-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.57s)
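
The exit status 1 results above are the expected outcome: systemctl is-active exits 3 for an inactive unit, which minikube's ssh wrapper surfaces as a non-zero exit while still printing "inactive". For comparison, the active runtime on this containerd profile should report cleanly:

# Expected to print "active" and exit 0 on this profile (assumption: the unit is named containerd).
out/minikube-linux-arm64 -p functional-753958 ssh "sudo systemctl is-active containerd"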

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.51s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-753958 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-753958 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)
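
The "failed to stop process: exit status 103" line is apparently tolerated by the harness, presumably because the tunnel process can already be gone at teardown. A sketch of the equivalent manual lifecycle:

out/minikube-linux-arm64 -p functional-753958 tunnel --alsologtostderr &
TUNNEL_PID=$!
# ... exercise LoadBalancer services here ...
kill "$TUNNEL_PID" 2>/dev/null || true   # tolerate an already-exited tunnel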

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "316.825493ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "54.775524ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "350.35307ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "55.482941ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)
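
A sketch of consuming the JSON listing, assuming minikube's usual profile schema of "valid"/"invalid" arrays (the payload itself is not shown in this log):

out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'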

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.7s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3541532396/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-753958 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (355.830903ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1202 21:29:52.912005  263241 retry.go:31] will retry after 331.794726ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3541532396/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-753958 ssh "sudo umount -f /mount-9p": exit status 1 (263.78259ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-753958 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3541532396/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.70s)
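
A condensed sketch of the fixed-port mount exercised above (host directory hypothetical). The first findmnt probe can race the mount becoming visible, which is why the test retries before succeeding:

out/minikube-linux-arm64 mount -p functional-753958 /tmp/host-dir:/mount-9p --port 46464 &
out/minikube-linux-arm64 -p functional-753958 ssh "findmnt -T /mount-9p | grep 9p"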

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1058117317/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1058117317/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1058117317/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-753958 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1058117317/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1058117317/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-753958 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1058117317/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.34s)
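
The cleanup step above hinges on the --kill flag, which tears down the mount processes attached to a profile in one shot:

out/minikube-linux-arm64 mount -p functional-753958 --kill=true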

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.51s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-753958 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-753958
docker.io/kicbase/echo-server:functional-753958
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-753958 image ls --format short --alsologtostderr:
I1202 21:30:12.434905  332879 out.go:360] Setting OutFile to fd 1 ...
I1202 21:30:12.435050  332879 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:30:12.435062  332879 out.go:374] Setting ErrFile to fd 2...
I1202 21:30:12.435082  332879 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:30:12.435384  332879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
I1202 21:30:12.436164  332879 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 21:30:12.436315  332879 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 21:30:12.436875  332879 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
I1202 21:30:12.455035  332879 ssh_runner.go:195] Run: systemctl --version
I1202 21:30:12.455091  332879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
I1202 21:30:12.472480  332879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
I1202 21:30:12.580403  332879 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)
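
As the stderr trace shows, image ls works by running crictl inside the node over ssh. A sketch for querying the same data directly, assuming jq is available (crictl's JSON output exposes an images array with repoTags):

out/minikube-linux-arm64 -p functional-753958 ssh "sudo crictl images --output json" | jq -r '.images[].repoTags[]'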

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-753958 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server               │ functional-753958 │ sha256:ce2d2c │ 2.17MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ sha256:667491 │ 8.03MB │
│ localhost/my-image                          │ functional-753958 │ sha256:e669c4 │ 831kB  │
│ registry.k8s.io/coredns/coredns             │ v1.13.1           │ sha256:e08f4d │ 21.2MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ sha256:2c5f0d │ 21.1MB │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0    │ sha256:ccd634 │ 24.7MB │
│ registry.k8s.io/pause                       │ 3.1               │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.10.1            │ sha256:d7b100 │ 265kB  │
│ docker.io/library/minikube-local-cache-test │ functional-753958 │ sha256:6a4d71 │ 991B   │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0    │ sha256:68b5f7 │ 20.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0    │ sha256:404c2e │ 22.4MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0    │ sha256:163787 │ 15.4MB │
│ registry.k8s.io/pause                       │ 3.3               │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/pause                       │ latest            │ sha256:8cb209 │ 71.3kB │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-753958 image ls --format table --alsologtostderr:
I1202 21:30:16.790265  333272 out.go:360] Setting OutFile to fd 1 ...
I1202 21:30:16.790447  333272 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:30:16.790468  333272 out.go:374] Setting ErrFile to fd 2...
I1202 21:30:16.790496  333272 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:30:16.790764  333272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
I1202 21:30:16.791426  333272 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 21:30:16.791594  333272 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 21:30:16.792176  333272 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
I1202 21:30:16.809793  333272 ssh_runner.go:195] Run: systemctl --version
I1202 21:30:16.809849  333272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
I1202 21:30:16.827688  333272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
I1202 21:30:16.936125  333272 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-753958 image ls --format json --alsologtostderr:
[{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:6a4d7114f1a3d4d0eb28a4f71082d140e55b9bf3c1bfc1edc182e1a4dd43b4b2","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-753958"],"size":"991"},{"id":"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8032639"},{"id":"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"24676285"},{"id":"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
],"size":"20658969"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"265458"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-753958"],"size":"2173567"},{"id":"sha256:e669c45f0a6841049e19860eeee9ceafc8f2b35f32efb23b41ead66da5c03690","repoDigests":[],"repoTags":["localhost/my-image:functional-753958"],"size":"830617"},{"id":"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"21166088"},{"id":"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"21134420"},{"id":"sha256:4
04c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"22428165"},{"id":"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"15389290"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-753958 image ls --format json --alsologtostderr:
I1202 21:30:16.562915  333235 out.go:360] Setting OutFile to fd 1 ...
I1202 21:30:16.563065  333235 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:30:16.563092  333235 out.go:374] Setting ErrFile to fd 2...
I1202 21:30:16.563112  333235 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:30:16.563391  333235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
I1202 21:30:16.564080  333235 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 21:30:16.564250  333235 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 21:30:16.564840  333235 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
I1202 21:30:16.582833  333235 ssh_runner.go:195] Run: systemctl --version
I1202 21:30:16.582892  333235 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
I1202 21:30:16.600346  333235 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
I1202 21:30:16.704017  333235 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)
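
The JSON form above is a flat array of {id, repoDigests, repoTags, size} objects, so it post-processes easily; a sketch with jq (assumed available):

# Print "tag size-in-bytes" pairs from the listing shown above.
out/minikube-linux-arm64 -p functional-753958 image ls --format json | jq -r '.[] | "\(.repoTags[0]) \(.size)"'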

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-753958 image ls --format yaml --alsologtostderr:
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8032639"
- id: sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "21166088"
- id: sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "21134420"
- id: sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "22428165"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-753958
size: "2173567"
- id: sha256:6a4d7114f1a3d4d0eb28a4f71082d140e55b9bf3c1bfc1edc182e1a4dd43b4b2
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-753958
size: "991"
- id: sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "24676285"
- id: sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "20658969"
- id: sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "15389290"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "265458"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-753958 image ls --format yaml --alsologtostderr:
I1202 21:30:12.671214  332921 out.go:360] Setting OutFile to fd 1 ...
I1202 21:30:12.671354  332921 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:30:12.671366  332921 out.go:374] Setting ErrFile to fd 2...
I1202 21:30:12.671399  332921 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:30:12.671675  332921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
I1202 21:30:12.672334  332921 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 21:30:12.672505  332921 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 21:30:12.673109  332921 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
I1202 21:30:12.691901  332921 ssh_runner.go:195] Run: systemctl --version
I1202 21:30:12.691959  332921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
I1202 21:30:12.709776  332921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
I1202 21:30:12.812057  332921 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.66s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-753958 ssh pgrep buildkitd: exit status 1 (274.105627ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 image build -t localhost/my-image:functional-753958 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-753958 image build -t localhost/my-image:functional-753958 testdata/build --alsologtostderr: (3.145350567s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-753958 image build -t localhost/my-image:functional-753958 testdata/build --alsologtostderr:
I1202 21:30:13.173872  333023 out.go:360] Setting OutFile to fd 1 ...
I1202 21:30:13.173993  333023 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:30:13.174005  333023 out.go:374] Setting ErrFile to fd 2...
I1202 21:30:13.174009  333023 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:30:13.174258  333023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
I1202 21:30:13.174879  333023 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 21:30:13.175569  333023 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 21:30:13.176158  333023 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
I1202 21:30:13.194042  333023 ssh_runner.go:195] Run: systemctl --version
I1202 21:30:13.194105  333023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
I1202 21:30:13.211292  333023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
I1202 21:30:13.314482  333023 build_images.go:162] Building image from path: /tmp/build.2449809562.tar
I1202 21:30:13.314550  333023 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1202 21:30:13.324779  333023 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2449809562.tar
I1202 21:30:13.329990  333023 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2449809562.tar: stat -c "%s %y" /var/lib/minikube/build/build.2449809562.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2449809562.tar': No such file or directory
I1202 21:30:13.330022  333023 ssh_runner.go:362] scp /tmp/build.2449809562.tar --> /var/lib/minikube/build/build.2449809562.tar (3072 bytes)
I1202 21:30:13.354219  333023 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2449809562
I1202 21:30:13.362188  333023 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2449809562 -xf /var/lib/minikube/build/build.2449809562.tar
I1202 21:30:13.371624  333023 containerd.go:394] Building image: /var/lib/minikube/build/build.2449809562
I1202 21:30:13.371744  333023 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2449809562 --local dockerfile=/var/lib/minikube/build/build.2449809562 --output type=image,name=localhost/my-image:functional-753958
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:25b7245b67433e7f4c9ba8cce96006596cf2501e4ac6af1dfdc503916f80fb4c 0.0s done
#8 exporting config sha256:e669c45f0a6841049e19860eeee9ceafc8f2b35f32efb23b41ead66da5c03690 0.0s done
#8 naming to localhost/my-image:functional-753958 done
#8 DONE 0.2s
I1202 21:30:16.245795  333023 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2449809562 --local dockerfile=/var/lib/minikube/build/build.2449809562 --output type=image,name=localhost/my-image:functional-753958: (2.874014156s)
I1202 21:30:16.245876  333023 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2449809562
I1202 21:30:16.253882  333023 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2449809562.tar
I1202 21:30:16.261331  333023 build_images.go:218] Built localhost/my-image:functional-753958 from /tmp/build.2449809562.tar
I1202 21:30:16.261359  333023 build_images.go:134] succeeded building to: functional-753958
I1202 21:30:16.261365  333023 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.66s)
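Note: the BuildKit steps above (#5 FROM busybox, #6 RUN true, #7 ADD content.txt) suggest the testdata/build context is roughly a three-instruction Dockerfile; this is a sketch inferred from the log, not the verbatim test fixture. Reproducing the build by hand would look like:

  printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
  touch content.txt   # placeholder content file, matching the 62B context transferred in step #4
  out/minikube-linux-arm64 -p functional-753958 image build -t localhost/my-image:functional-753958 .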

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-753958
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 image load --daemon kicbase/echo-server:functional-753958 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 image load --daemon kicbase/echo-server:functional-753958 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.09s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.35s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-753958
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 image load --daemon kicbase/echo-server:functional-753958 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.35s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 image save kicbase/echo-server:functional-753958 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 image rm kicbase/echo-server:functional-753958 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.66s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.66s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-753958
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 image save --daemon kicbase/echo-server:functional-753958 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-753958
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-753958 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-753958
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-753958
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-753958
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (169s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1202 21:32:36.514172  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:32:50.414484  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:32:50.420899  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:32:50.432279  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:32:50.461420  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:32:50.502785  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:32:50.584221  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:32:50.745689  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:32:51.067368  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:32:51.709321  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:32:52.990589  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:32:55.552123  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:33:00.674314  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:33:10.916558  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:33:31.398043  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:34:12.360323  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-586025 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m48.120764646s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (169.00s)
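Note: the repeated "Loading client cert failed" lines during this start are not failures of this test; they appear to come from the long-lived test process's client-go transport cache, which still references client certificates of profiles (addons-409059, functional-753958) that earlier tests deleted. If stale entries also lingered in a user kubeconfig, pruning them would look like the following sketch (context names taken from the errors above; minikube normally removes these when a profile is deleted):

  kubectl config get-contexts
  kubectl config delete-context addons-409059 2>/dev/null || true
  kubectl config delete-context functional-753958 2>/dev/null || true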

                                                
                                    
TestMultiControlPlane/serial/DeployApp (39.1s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 kubectl -- rollout status deployment/busybox
E1202 21:34:44.122764  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-586025 kubectl -- rollout status deployment/busybox: (36.179011108s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 kubectl -- exec busybox-7b57f96db7-68fdb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 kubectl -- exec busybox-7b57f96db7-b9c25 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 kubectl -- exec busybox-7b57f96db7-zfkc9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 kubectl -- exec busybox-7b57f96db7-68fdb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 kubectl -- exec busybox-7b57f96db7-b9c25 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 kubectl -- exec busybox-7b57f96db7-zfkc9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 kubectl -- exec busybox-7b57f96db7-68fdb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 kubectl -- exec busybox-7b57f96db7-b9c25 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 kubectl -- exec busybox-7b57f96db7-zfkc9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (39.10s)

TestMultiControlPlane/serial/PingHostFromPods (1.69s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 kubectl -- exec busybox-7b57f96db7-68fdb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 kubectl -- exec busybox-7b57f96db7-68fdb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 kubectl -- exec busybox-7b57f96db7-b9c25 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 kubectl -- exec busybox-7b57f96db7-b9c25 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 kubectl -- exec busybox-7b57f96db7-zfkc9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 kubectl -- exec busybox-7b57f96db7-zfkc9 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.69s)
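Note: the pipeline in this test leans on the layout of busybox's nslookup output: line 5 (awk 'NR==5') carries the answer for host.minikube.internal, and the third space-separated field (cut -d' ' -f3) is the IP itself, which resolves to 192.168.49.1, the gateway of the cluster's docker network in this run. Condensed, what runs inside each pod (the line number is specific to busybox's resolver output):

  nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3   # prints 192.168.49.1
  ping -c 1 192.168.49.1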

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (29.96s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 node add --alsologtostderr -v 5
E1202 21:35:34.282167  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-586025 node add --alsologtostderr -v 5: (28.905273325s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-586025 status --alsologtostderr -v 5: (1.055491837s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (29.96s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-586025 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.036728805s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

TestMultiControlPlane/serial/CopyFile (20.64s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-586025 status --output json --alsologtostderr -v 5: (1.065923565s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 cp testdata/cp-test.txt ha-586025:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 cp ha-586025:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3219738292/001/cp-test_ha-586025.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 cp ha-586025:/home/docker/cp-test.txt ha-586025-m02:/home/docker/cp-test_ha-586025_ha-586025-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m02 "sudo cat /home/docker/cp-test_ha-586025_ha-586025-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 cp ha-586025:/home/docker/cp-test.txt ha-586025-m03:/home/docker/cp-test_ha-586025_ha-586025-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m03 "sudo cat /home/docker/cp-test_ha-586025_ha-586025-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 cp ha-586025:/home/docker/cp-test.txt ha-586025-m04:/home/docker/cp-test_ha-586025_ha-586025-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m04 "sudo cat /home/docker/cp-test_ha-586025_ha-586025-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 cp testdata/cp-test.txt ha-586025-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 cp ha-586025-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3219738292/001/cp-test_ha-586025-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 cp ha-586025-m02:/home/docker/cp-test.txt ha-586025:/home/docker/cp-test_ha-586025-m02_ha-586025.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025 "sudo cat /home/docker/cp-test_ha-586025-m02_ha-586025.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 cp ha-586025-m02:/home/docker/cp-test.txt ha-586025-m03:/home/docker/cp-test_ha-586025-m02_ha-586025-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m03 "sudo cat /home/docker/cp-test_ha-586025-m02_ha-586025-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 cp ha-586025-m02:/home/docker/cp-test.txt ha-586025-m04:/home/docker/cp-test_ha-586025-m02_ha-586025-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m04 "sudo cat /home/docker/cp-test_ha-586025-m02_ha-586025-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 cp testdata/cp-test.txt ha-586025-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 cp ha-586025-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3219738292/001/cp-test_ha-586025-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 cp ha-586025-m03:/home/docker/cp-test.txt ha-586025:/home/docker/cp-test_ha-586025-m03_ha-586025.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025 "sudo cat /home/docker/cp-test_ha-586025-m03_ha-586025.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 cp ha-586025-m03:/home/docker/cp-test.txt ha-586025-m02:/home/docker/cp-test_ha-586025-m03_ha-586025-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m02 "sudo cat /home/docker/cp-test_ha-586025-m03_ha-586025-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 cp ha-586025-m03:/home/docker/cp-test.txt ha-586025-m04:/home/docker/cp-test_ha-586025-m03_ha-586025-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m04 "sudo cat /home/docker/cp-test_ha-586025-m03_ha-586025-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 cp testdata/cp-test.txt ha-586025-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 cp ha-586025-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3219738292/001/cp-test_ha-586025-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 cp ha-586025-m04:/home/docker/cp-test.txt ha-586025:/home/docker/cp-test_ha-586025-m04_ha-586025.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025 "sudo cat /home/docker/cp-test_ha-586025-m04_ha-586025.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 cp ha-586025-m04:/home/docker/cp-test.txt ha-586025-m02:/home/docker/cp-test_ha-586025-m04_ha-586025-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m02 "sudo cat /home/docker/cp-test_ha-586025-m04_ha-586025-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 cp ha-586025-m04:/home/docker/cp-test.txt ha-586025-m03:/home/docker/cp-test_ha-586025-m04_ha-586025-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m03 "sudo cat /home/docker/cp-test_ha-586025-m04_ha-586025-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.64s)
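Note: every hop above follows the same copy-then-verify pattern: minikube cp places the file on the target node, and an ssh'd sudo cat confirms its content arrived intact. One representative pair, taken verbatim from the log:

  out/minikube-linux-arm64 -p ha-586025 cp testdata/cp-test.txt ha-586025-m02:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p ha-586025 ssh -n ha-586025-m02 "sudo cat /home/docker/cp-test.txt"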

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.94s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-586025 node stop m02 --alsologtostderr -v 5: (12.163964488s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-586025 status --alsologtostderr -v 5: exit status 7 (776.552748ms)

-- stdout --
	ha-586025
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-586025-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-586025-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-586025-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1202 21:36:28.322226  351001 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:36:28.322344  351001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:36:28.322353  351001 out.go:374] Setting ErrFile to fd 2...
	I1202 21:36:28.322358  351001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:36:28.322601  351001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:36:28.322778  351001 out.go:368] Setting JSON to false
	I1202 21:36:28.322819  351001 mustload.go:66] Loading cluster: ha-586025
	I1202 21:36:28.322893  351001 notify.go:221] Checking for updates...
	I1202 21:36:28.324105  351001 config.go:182] Loaded profile config "ha-586025": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 21:36:28.324135  351001 status.go:174] checking status of ha-586025 ...
	I1202 21:36:28.326232  351001 cli_runner.go:164] Run: docker container inspect ha-586025 --format={{.State.Status}}
	I1202 21:36:28.344855  351001 status.go:371] ha-586025 host status = "Running" (err=<nil>)
	I1202 21:36:28.344877  351001 host.go:66] Checking if "ha-586025" exists ...
	I1202 21:36:28.345188  351001 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-586025
	I1202 21:36:28.376218  351001 host.go:66] Checking if "ha-586025" exists ...
	I1202 21:36:28.376526  351001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:36:28.376569  351001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-586025
	I1202 21:36:28.396432  351001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/ha-586025/id_rsa Username:docker}
	I1202 21:36:28.499932  351001 ssh_runner.go:195] Run: systemctl --version
	I1202 21:36:28.507013  351001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:36:28.520245  351001 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:36:28.579663  351001 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-02 21:36:28.569634316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:36:28.580311  351001 kubeconfig.go:125] found "ha-586025" server: "https://192.168.49.254:8443"
	I1202 21:36:28.580365  351001 api_server.go:166] Checking apiserver status ...
	I1202 21:36:28.580437  351001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:36:28.595610  351001 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1441/cgroup
	I1202 21:36:28.604398  351001 api_server.go:182] apiserver freezer: "9:freezer:/docker/2b61d5e86c46189d700bf7e08856c41ba206686fdd4ad20c2e241b4843b0a873/kubepods/burstable/podfb503ad655453b0d187b81dfefd94af2/e39107208e49378b5153840abfbecdcf7c62770872bed42e67cd220b89096898"
	I1202 21:36:28.604487  351001 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2b61d5e86c46189d700bf7e08856c41ba206686fdd4ad20c2e241b4843b0a873/kubepods/burstable/podfb503ad655453b0d187b81dfefd94af2/e39107208e49378b5153840abfbecdcf7c62770872bed42e67cd220b89096898/freezer.state
	I1202 21:36:28.615876  351001 api_server.go:204] freezer state: "THAWED"
	I1202 21:36:28.615918  351001 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1202 21:36:28.626200  351001 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1202 21:36:28.626237  351001 status.go:463] ha-586025 apiserver status = Running (err=<nil>)
	I1202 21:36:28.626249  351001 status.go:176] ha-586025 status: &{Name:ha-586025 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 21:36:28.626283  351001 status.go:174] checking status of ha-586025-m02 ...
	I1202 21:36:28.626653  351001 cli_runner.go:164] Run: docker container inspect ha-586025-m02 --format={{.State.Status}}
	I1202 21:36:28.644043  351001 status.go:371] ha-586025-m02 host status = "Stopped" (err=<nil>)
	I1202 21:36:28.644063  351001 status.go:384] host is not running, skipping remaining checks
	I1202 21:36:28.644070  351001 status.go:176] ha-586025-m02 status: &{Name:ha-586025-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 21:36:28.644100  351001 status.go:174] checking status of ha-586025-m03 ...
	I1202 21:36:28.644556  351001 cli_runner.go:164] Run: docker container inspect ha-586025-m03 --format={{.State.Status}}
	I1202 21:36:28.663203  351001 status.go:371] ha-586025-m03 host status = "Running" (err=<nil>)
	I1202 21:36:28.663240  351001 host.go:66] Checking if "ha-586025-m03" exists ...
	I1202 21:36:28.663558  351001 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-586025-m03
	I1202 21:36:28.681836  351001 host.go:66] Checking if "ha-586025-m03" exists ...
	I1202 21:36:28.682326  351001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:36:28.682415  351001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-586025-m03
	I1202 21:36:28.700542  351001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/ha-586025-m03/id_rsa Username:docker}
	I1202 21:36:28.807284  351001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:36:28.825314  351001 kubeconfig.go:125] found "ha-586025" server: "https://192.168.49.254:8443"
	I1202 21:36:28.825343  351001 api_server.go:166] Checking apiserver status ...
	I1202 21:36:28.825387  351001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:36:28.838980  351001 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1321/cgroup
	I1202 21:36:28.848252  351001 api_server.go:182] apiserver freezer: "9:freezer:/docker/34617e45f1877b7a1bcd8fd693fa6896cf99931dfe857744551d4cc0dafce674/kubepods/burstable/pod11b378460f4eb9cd28d937ebd80ba0c2/1fe24928576c3249a26b6809e60fe956c02c7c5ef06665e0a21ea619c8c8839a"
	I1202 21:36:28.848323  351001 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/34617e45f1877b7a1bcd8fd693fa6896cf99931dfe857744551d4cc0dafce674/kubepods/burstable/pod11b378460f4eb9cd28d937ebd80ba0c2/1fe24928576c3249a26b6809e60fe956c02c7c5ef06665e0a21ea619c8c8839a/freezer.state
	I1202 21:36:28.856602  351001 api_server.go:204] freezer state: "THAWED"
	I1202 21:36:28.856640  351001 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1202 21:36:28.865097  351001 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1202 21:36:28.865123  351001 status.go:463] ha-586025-m03 apiserver status = Running (err=<nil>)
	I1202 21:36:28.865133  351001 status.go:176] ha-586025-m03 status: &{Name:ha-586025-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 21:36:28.865150  351001 status.go:174] checking status of ha-586025-m04 ...
	I1202 21:36:28.865469  351001 cli_runner.go:164] Run: docker container inspect ha-586025-m04 --format={{.State.Status}}
	I1202 21:36:28.885177  351001 status.go:371] ha-586025-m04 host status = "Running" (err=<nil>)
	I1202 21:36:28.885204  351001 host.go:66] Checking if "ha-586025-m04" exists ...
	I1202 21:36:28.885511  351001 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-586025-m04
	I1202 21:36:28.907968  351001 host.go:66] Checking if "ha-586025-m04" exists ...
	I1202 21:36:28.908282  351001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:36:28.908332  351001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-586025-m04
	I1202 21:36:28.925747  351001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/ha-586025-m04/id_rsa Username:docker}
	I1202 21:36:29.026956  351001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:36:29.040220  351001 status.go:176] ha-586025-m04 status: &{Name:ha-586025-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.94s)
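Note: the stderr above shows how the status command decides an apiserver is healthy, in three steps: locate the kube-apiserver process, confirm its freezer cgroup is THAWED (i.e. not paused), then probe /healthz on the shared HA endpoint. A condensed sketch of the same probes run inside a node; the cgroup path below is a placeholder for the docker/kubepods path seen in the log:

  pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
  sudo egrep '^[0-9]+:freezer:' /proc/$pid/cgroup
  sudo cat /sys/fs/cgroup/freezer/<cgroup-path>/freezer.state   # expect THAWED
  curl -ks https://192.168.49.254:8443/healthz                  # expect ok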

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

TestMultiControlPlane/serial/RestartSecondaryNode (12.9s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-586025 node start m02 --alsologtostderr -v 5: (10.938161337s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-586025 status --alsologtostderr -v 5: (1.804560703s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (12.90s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.18s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.176019756s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.18s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (99.82s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-586025 stop --alsologtostderr -v 5: (37.526355674s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 start --wait true --alsologtostderr -v 5
E1202 21:37:36.513933  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:37:47.197794  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:37:50.413860  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:38:18.123816  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-586025 start --wait true --alsologtostderr -v 5: (1m2.109110221s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (99.82s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.97s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-586025 node delete m03 --alsologtostderr -v 5: (9.992990351s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.97s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

TestMultiControlPlane/serial/StopCluster (36.36s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-586025 stop --alsologtostderr -v 5: (36.24886979s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-586025 status --alsologtostderr -v 5: exit status 7 (111.846136ms)

-- stdout --
	ha-586025
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-586025-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-586025-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1202 21:39:11.802817  365849 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:39:11.802943  365849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:39:11.802959  365849 out.go:374] Setting ErrFile to fd 2...
	I1202 21:39:11.802965  365849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:39:11.803205  365849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:39:11.803389  365849 out.go:368] Setting JSON to false
	I1202 21:39:11.803429  365849 mustload.go:66] Loading cluster: ha-586025
	I1202 21:39:11.803522  365849 notify.go:221] Checking for updates...
	I1202 21:39:11.803836  365849 config.go:182] Loaded profile config "ha-586025": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 21:39:11.803847  365849 status.go:174] checking status of ha-586025 ...
	I1202 21:39:11.804343  365849 cli_runner.go:164] Run: docker container inspect ha-586025 --format={{.State.Status}}
	I1202 21:39:11.823460  365849 status.go:371] ha-586025 host status = "Stopped" (err=<nil>)
	I1202 21:39:11.823485  365849 status.go:384] host is not running, skipping remaining checks
	I1202 21:39:11.823492  365849 status.go:176] ha-586025 status: &{Name:ha-586025 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 21:39:11.823522  365849 status.go:174] checking status of ha-586025-m02 ...
	I1202 21:39:11.823850  365849 cli_runner.go:164] Run: docker container inspect ha-586025-m02 --format={{.State.Status}}
	I1202 21:39:11.841402  365849 status.go:371] ha-586025-m02 host status = "Stopped" (err=<nil>)
	I1202 21:39:11.841426  365849 status.go:384] host is not running, skipping remaining checks
	I1202 21:39:11.841438  365849 status.go:176] ha-586025-m02 status: &{Name:ha-586025-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 21:39:11.841457  365849 status.go:174] checking status of ha-586025-m04 ...
	I1202 21:39:11.841757  365849 cli_runner.go:164] Run: docker container inspect ha-586025-m04 --format={{.State.Status}}
	I1202 21:39:11.862708  365849 status.go:371] ha-586025-m04 host status = "Stopped" (err=<nil>)
	I1202 21:39:11.862728  365849 status.go:384] host is not running, skipping remaining checks
	I1202 21:39:11.862735  365849 status.go:176] ha-586025-m04 status: &{Name:ha-586025-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.36s)

TestMultiControlPlane/serial/RestartCluster (59.27s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1202 21:39:44.122447  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-586025 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (58.254316385s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (59.27s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

TestMultiControlPlane/serial/AddSecondaryNode (60.97s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-586025 node add --control-plane --alsologtostderr -v 5: (59.852974375s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-586025 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-586025 status --alsologtostderr -v 5: (1.118046081s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (60.97s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.087345442s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

TestJSONOutput/start/Command (84.95s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-806338 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E1202 21:42:36.513721  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-806338 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m24.946651329s)
--- PASS: TestJSONOutput/start/Command (84.95s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-806338 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-806338 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.96s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-806338 --output=json --user=testUser
E1202 21:42:50.414368  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-806338 --output=json --user=testUser: (5.961455039s)
--- PASS: TestJSONOutput/stop/Command (5.96s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-400341 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-400341 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (91.496314ms)
-- stdout --
	{"specversion":"1.0","id":"e87df2ff-b1f2-4ab1-9613-f3195b54eaf0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-400341] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"13f749d0-6782-4416-8f58-6f7cd205c171","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21997"}}
	{"specversion":"1.0","id":"8e9103ad-923a-4ced-aa42-ffc341856ce6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"78f7ddfb-45a3-4c58-8d68-afbda0764196","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig"}}
	{"specversion":"1.0","id":"7d8b6231-13f9-4d1c-840c-fc22a08e65c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube"}}
	{"specversion":"1.0","id":"f61f1406-595c-4c14-9f32-9650f99415f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f026b30a-11d0-4d67-95b4-cc9ec3136f74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"99805d3f-93da-4853-b318-5c40f9593d50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-400341" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-400341
--- PASS: TestErrorJSONOutput (0.23s)
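Each line that --output=json emits above is a CloudEvents 1.0 envelope ("specversion":"1.0") carrying a minikube-specific type and a string-valued data payload. The following consumer is a minimal sketch (an illustration, not minikube code); the string-only data map matches the events shown above but is an assumption for other event types:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors only the envelope fields used here; the rest are ignored.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// e.g. piped from: minikube start --output=json ...
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // tolerate non-JSON noise in the stream
			}
			switch ev.Type {
			case "io.k8s.sigs.minikube.step":
				fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["name"])
			case "io.k8s.sigs.minikube.error":
				fmt.Printf("exit %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
			default:
				fmt.Println(ev.Data["message"])
			}
		}
	}

Run against the stdout block above, this would print the seven info lines and then the DRV_UNSUPPORTED_OS error with exit code 56.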

TestKicCustomNetwork/create_custom_network (40.81s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-065242 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-065242 --network=: (38.613588461s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-065242" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-065242
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-065242: (2.169661361s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.81s)

TestKicCustomNetwork/use_default_bridge_network (35.23s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-966364 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-966364 --network=bridge: (33.102314904s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-966364" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-966364
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-966364: (2.093539297s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.23s)

TestKicExistingNetwork (38.04s)

=== RUN   TestKicExistingNetwork
I1202 21:44:14.665747  263241 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1202 21:44:14.681617  263241 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1202 21:44:14.681720  263241 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1202 21:44:14.681737  263241 cli_runner.go:164] Run: docker network inspect existing-network
W1202 21:44:14.697237  263241 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1202 21:44:14.697268  263241 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1202 21:44:14.697282  263241 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1202 21:44:14.697378  263241 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1202 21:44:14.714645  263241 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-37045a918311 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fa:0e:6a:1d:5f:aa} reservation:<nil>}
I1202 21:44:14.714919  263241 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b792b0}
I1202 21:44:14.714944  263241 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1202 21:44:14.714994  263241 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1202 21:44:14.779900  263241 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-930898 --network=existing-network
E1202 21:44:44.123027  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-930898 --network=existing-network: (35.782477783s)
helpers_test.go:175: Cleaning up "existing-network-930898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-930898
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-930898: (2.107080634s)
I1202 21:44:52.686913  263241 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (38.04s)
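The log above shows the whole KIC network dance: the first inspect exits 1 because existing-network does not exist yet, 192.168.49.0/24 is skipped as taken by the default minikube bridge, and the network is created on the next candidate, 192.168.58.0/24. A rough sketch of that probing loop follows (illustrative, not minikube's network_create.go; the step of 9 between candidates is simply read off the 49 -> 58 jump in this log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Subnets already in use, e.g. gathered from `docker network inspect`;
		// hardcoded here for the sketch.
		taken := map[string]bool{"192.168.49.0/24": true}

		for third := 49; third <= 255; third += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", third)
			if taken[subnet] {
				fmt.Println("skipping subnet", subnet, "that is taken")
				continue
			}
			// Mirrors the `docker network create` invocation in the log
			// (MTU and minikube label included; masquerade/icc options elided).
			out, err := exec.Command("docker", "network", "create",
				"--driver=bridge",
				"--subnet="+subnet,
				"--gateway="+fmt.Sprintf("192.168.%d.1", third),
				"-o", "com.docker.network.driver.mtu=1500",
				"--label=created_by.minikube.sigs.k8s.io=true",
				"existing-network").CombinedOutput()
			if err != nil {
				fmt.Printf("create on %s failed (%v): %s", subnet, err, out)
				continue // e.g. overlaps an undetected network; try the next one
			}
			fmt.Println("created existing-network on", subnet)
			return
		}
	}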

TestKicCustomSubnet (34.59s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-300835 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-300835 --subnet=192.168.60.0/24: (32.323596246s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-300835 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-300835" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-300835
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-300835: (2.241148884s)
--- PASS: TestKicCustomSubnet (34.59s)

TestKicStaticIP (36.97s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-228911 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-228911 --static-ip=192.168.200.200: (34.626988647s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-228911 ip
helpers_test.go:175: Cleaning up "static-ip-228911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-228911
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-228911: (2.173098105s)
--- PASS: TestKicStaticIP (36.97s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (70.99s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-986351 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-986351 --driver=docker  --container-runtime=containerd: (27.865292973s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-988993 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-988993 --driver=docker  --container-runtime=containerd: (37.487632227s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-986351
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-988993
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-988993" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-988993
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-988993: (2.081735046s)
helpers_test.go:175: Cleaning up "first-986351" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-986351
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-986351: (2.051416132s)
--- PASS: TestMinikubeProfile (70.99s)

TestMountStart/serial/StartWithMountFirst (8.56s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-634554 --memory=3072 --mount-string /tmp/TestMountStartserial2249513241/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E1202 21:47:19.585862  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-634554 --memory=3072 --mount-string /tmp/TestMountStartserial2249513241/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.564437417s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.56s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-634554 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (8.19s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-636802 --memory=3072 --mount-string /tmp/TestMountStartserial2249513241/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-636802 --memory=3072 --mount-string /tmp/TestMountStartserial2249513241/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.190143101s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.19s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-636802 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-634554 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-634554 --alsologtostderr -v=5: (1.707101102s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-636802 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-636802
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-636802: (1.283572585s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (7.79s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-636802
E1202 21:47:36.514291  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-636802: (6.780836887s)
--- PASS: TestMountStart/serial/RestartStopped (7.79s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-636802 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (104.84s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-344225 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1202 21:47:50.414235  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:49:13.485584  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-344225 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m44.320378716s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (104.84s)

TestMultiNode/serial/DeployApp2Nodes (5.82s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-344225 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-344225 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-344225 -- rollout status deployment/busybox: (3.956561155s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-344225 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-344225 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-344225 -- exec busybox-7b57f96db7-nh4fm -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-344225 -- exec busybox-7b57f96db7-rr5tf -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-344225 -- exec busybox-7b57f96db7-nh4fm -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-344225 -- exec busybox-7b57f96db7-rr5tf -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-344225 -- exec busybox-7b57f96db7-nh4fm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-344225 -- exec busybox-7b57f96db7-rr5tf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.82s)

TestMultiNode/serial/PingHostFrom2Pods (0.98s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-344225 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-344225 -- exec busybox-7b57f96db7-nh4fm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-344225 -- exec busybox-7b57f96db7-nh4fm -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-344225 -- exec busybox-7b57f96db7-rr5tf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-344225 -- exec busybox-7b57f96db7-rr5tf -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)

TestMultiNode/serial/AddNode (58.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-344225 -v=5 --alsologtostderr
E1202 21:49:44.122411  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-344225 -v=5 --alsologtostderr: (57.375654481s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.07s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-344225 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.71s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

TestMultiNode/serial/CopyFile (10.8s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 cp testdata/cp-test.txt multinode-344225:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 ssh -n multinode-344225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 cp multinode-344225:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile620019516/001/cp-test_multinode-344225.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 ssh -n multinode-344225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 cp multinode-344225:/home/docker/cp-test.txt multinode-344225-m02:/home/docker/cp-test_multinode-344225_multinode-344225-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 ssh -n multinode-344225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 ssh -n multinode-344225-m02 "sudo cat /home/docker/cp-test_multinode-344225_multinode-344225-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 cp multinode-344225:/home/docker/cp-test.txt multinode-344225-m03:/home/docker/cp-test_multinode-344225_multinode-344225-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 ssh -n multinode-344225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 ssh -n multinode-344225-m03 "sudo cat /home/docker/cp-test_multinode-344225_multinode-344225-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 cp testdata/cp-test.txt multinode-344225-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 ssh -n multinode-344225-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 cp multinode-344225-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile620019516/001/cp-test_multinode-344225-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 ssh -n multinode-344225-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 cp multinode-344225-m02:/home/docker/cp-test.txt multinode-344225:/home/docker/cp-test_multinode-344225-m02_multinode-344225.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 ssh -n multinode-344225-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 ssh -n multinode-344225 "sudo cat /home/docker/cp-test_multinode-344225-m02_multinode-344225.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 cp multinode-344225-m02:/home/docker/cp-test.txt multinode-344225-m03:/home/docker/cp-test_multinode-344225-m02_multinode-344225-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 ssh -n multinode-344225-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 ssh -n multinode-344225-m03 "sudo cat /home/docker/cp-test_multinode-344225-m02_multinode-344225-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 cp testdata/cp-test.txt multinode-344225-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 ssh -n multinode-344225-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 cp multinode-344225-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile620019516/001/cp-test_multinode-344225-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 ssh -n multinode-344225-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 cp multinode-344225-m03:/home/docker/cp-test.txt multinode-344225:/home/docker/cp-test_multinode-344225-m03_multinode-344225.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 ssh -n multinode-344225-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 ssh -n multinode-344225 "sudo cat /home/docker/cp-test_multinode-344225-m03_multinode-344225.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 cp multinode-344225-m03:/home/docker/cp-test.txt multinode-344225-m02:/home/docker/cp-test_multinode-344225-m03_multinode-344225-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 ssh -n multinode-344225-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 ssh -n multinode-344225-m02 "sudo cat /home/docker/cp-test_multinode-344225-m03_multinode-344225-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.80s)
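Every triple of helper lines above is the same round-trip: `minikube cp` into a node, then `minikube ssh -- sudo cat` to read the file back for comparison. A condensed sketch of one round-trip (illustrative only; the binary path, profile and file paths are the ones from this log):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const mk = "out/minikube-linux-arm64"
		const profile = "multinode-344225"

		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			panic(err)
		}
		// minikube -p <profile> cp <local> <node>:<path>
		if err := exec.Command(mk, "-p", profile, "cp",
			"testdata/cp-test.txt", profile+":/home/docker/cp-test.txt").Run(); err != nil {
			panic(err)
		}
		// minikube -p <profile> ssh -n <node> "sudo cat <path>"
		got, err := exec.Command(mk, "-p", profile, "ssh", "-n", profile,
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			panic(err)
		}
		// ssh may append a trailing newline, so compare trimmed contents.
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			fmt.Println("cp round-trip mismatch")
			os.Exit(1)
		}
		fmt.Println("cp round-trip OK")
	}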

TestMultiNode/serial/StopNode (2.41s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-344225 node stop m03: (1.3217838s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-344225 status: exit status 7 (545.262417ms)
-- stdout --
	multinode-344225
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-344225-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-344225-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-344225 status --alsologtostderr: exit status 7 (547.135366ms)
-- stdout --
	multinode-344225
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-344225-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-344225-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1202 21:50:49.036534  418866 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:50:49.036693  418866 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:50:49.036723  418866 out.go:374] Setting ErrFile to fd 2...
	I1202 21:50:49.036744  418866 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:50:49.037009  418866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:50:49.037221  418866 out.go:368] Setting JSON to false
	I1202 21:50:49.037279  418866 mustload.go:66] Loading cluster: multinode-344225
	I1202 21:50:49.037807  418866 config.go:182] Loaded profile config "multinode-344225": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 21:50:49.037867  418866 status.go:174] checking status of multinode-344225 ...
	I1202 21:50:49.037328  418866 notify.go:221] Checking for updates...
	I1202 21:50:49.039328  418866 cli_runner.go:164] Run: docker container inspect multinode-344225 --format={{.State.Status}}
	I1202 21:50:49.057088  418866 status.go:371] multinode-344225 host status = "Running" (err=<nil>)
	I1202 21:50:49.057109  418866 host.go:66] Checking if "multinode-344225" exists ...
	I1202 21:50:49.057396  418866 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-344225
	I1202 21:50:49.090450  418866 host.go:66] Checking if "multinode-344225" exists ...
	I1202 21:50:49.090750  418866 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:50:49.090802  418866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-344225
	I1202 21:50:49.113954  418866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33233 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/multinode-344225/id_rsa Username:docker}
	I1202 21:50:49.214903  418866 ssh_runner.go:195] Run: systemctl --version
	I1202 21:50:49.221032  418866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:50:49.241955  418866 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 21:50:49.298852  418866 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-02 21:50:49.288500037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 21:50:49.299373  418866 kubeconfig.go:125] found "multinode-344225" server: "https://192.168.67.2:8443"
	I1202 21:50:49.299407  418866 api_server.go:166] Checking apiserver status ...
	I1202 21:50:49.299457  418866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 21:50:49.311927  418866 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1318/cgroup
	I1202 21:50:49.319814  418866 api_server.go:182] apiserver freezer: "9:freezer:/docker/0daa417b7e54ac89a941649d6524ac37c0d1f4b4d98bd53ccdedf1b84a4ba9bd/kubepods/burstable/pod72c82b02c5e46663fb72c107e9508219/6bfe3ce32fced66fac7436f40424a0c357352b99186d88b6faa8b824efaad217"
	I1202 21:50:49.319890  418866 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0daa417b7e54ac89a941649d6524ac37c0d1f4b4d98bd53ccdedf1b84a4ba9bd/kubepods/burstable/pod72c82b02c5e46663fb72c107e9508219/6bfe3ce32fced66fac7436f40424a0c357352b99186d88b6faa8b824efaad217/freezer.state
	I1202 21:50:49.328398  418866 api_server.go:204] freezer state: "THAWED"
	I1202 21:50:49.328429  418866 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1202 21:50:49.337432  418866 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1202 21:50:49.337468  418866 status.go:463] multinode-344225 apiserver status = Running (err=<nil>)
	I1202 21:50:49.337489  418866 status.go:176] multinode-344225 status: &{Name:multinode-344225 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 21:50:49.337505  418866 status.go:174] checking status of multinode-344225-m02 ...
	I1202 21:50:49.337910  418866 cli_runner.go:164] Run: docker container inspect multinode-344225-m02 --format={{.State.Status}}
	I1202 21:50:49.354354  418866 status.go:371] multinode-344225-m02 host status = "Running" (err=<nil>)
	I1202 21:50:49.354395  418866 host.go:66] Checking if "multinode-344225-m02" exists ...
	I1202 21:50:49.354701  418866 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-344225-m02
	I1202 21:50:49.371302  418866 host.go:66] Checking if "multinode-344225-m02" exists ...
	I1202 21:50:49.371611  418866 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 21:50:49.371653  418866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-344225-m02
	I1202 21:50:49.389188  418866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33238 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/multinode-344225-m02/id_rsa Username:docker}
	I1202 21:50:49.491073  418866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 21:50:49.503628  418866 status.go:176] multinode-344225-m02 status: &{Name:multinode-344225-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1202 21:50:49.503662  418866 status.go:174] checking status of multinode-344225-m03 ...
	I1202 21:50:49.503971  418866 cli_runner.go:164] Run: docker container inspect multinode-344225-m03 --format={{.State.Status}}
	I1202 21:50:49.520920  418866 status.go:371] multinode-344225-m03 host status = "Stopped" (err=<nil>)
	I1202 21:50:49.520942  418866 status.go:384] host is not running, skipping remaining checks
	I1202 21:50:49.520949  418866 status.go:176] multinode-344225-m03 status: &{Name:multinode-344225-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.41s)
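The stderr dump above shows how `status` decides "apiserver: Running": it pgreps kube-apiserver on the node, confirms the process's freezer cgroup is THAWED (i.e. not paused), and finally probes /healthz on the apiserver port. A standalone sketch of that last probe (an illustration, not minikube's api_server.go; the endpoint is the one from the log, and certificate verification is skipped because the cluster CA is self-signed):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Demo only: trust-anything TLS, matching a local test cluster.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver status = Stopped:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with the literal body "ok",
		// as in the "returned 200: ok" line above.
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}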

TestMultiNode/serial/StartAfterStop (7.88s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-344225 node start m03 -v=5 --alsologtostderr: (7.079652752s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.88s)

TestMultiNode/serial/RestartKeepsNodes (79.16s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-344225
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-344225
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-344225: (25.424433413s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-344225 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-344225 --wait=true -v=5 --alsologtostderr: (53.618073093s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-344225
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.16s)

TestMultiNode/serial/DeleteNode (5.55s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-344225 node delete m03: (4.860729341s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.55s)

TestMultiNode/serial/StopMultiNode (24.08s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 stop
E1202 21:52:36.513739  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-344225 stop: (23.8795551s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-344225 status: exit status 7 (102.925789ms)

                                                
                                                
-- stdout --
	multinode-344225
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-344225-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-344225 status --alsologtostderr: exit status 7 (92.895822ms)

                                                
                                                
-- stdout --
	multinode-344225
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-344225-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 21:52:46.138124  427617 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:52:46.138327  427617 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:52:46.138354  427617 out.go:374] Setting ErrFile to fd 2...
	I1202 21:52:46.138373  427617 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:52:46.138636  427617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:52:46.138846  427617 out.go:368] Setting JSON to false
	I1202 21:52:46.138906  427617 mustload.go:66] Loading cluster: multinode-344225
	I1202 21:52:46.138981  427617 notify.go:221] Checking for updates...
	I1202 21:52:46.139964  427617 config.go:182] Loaded profile config "multinode-344225": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 21:52:46.140005  427617 status.go:174] checking status of multinode-344225 ...
	I1202 21:52:46.140567  427617 cli_runner.go:164] Run: docker container inspect multinode-344225 --format={{.State.Status}}
	I1202 21:52:46.159594  427617 status.go:371] multinode-344225 host status = "Stopped" (err=<nil>)
	I1202 21:52:46.159613  427617 status.go:384] host is not running, skipping remaining checks
	I1202 21:52:46.159620  427617 status.go:176] multinode-344225 status: &{Name:multinode-344225 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 21:52:46.159645  427617 status.go:174] checking status of multinode-344225-m02 ...
	I1202 21:52:46.159984  427617 cli_runner.go:164] Run: docker container inspect multinode-344225-m02 --format={{.State.Status}}
	I1202 21:52:46.182043  427617 status.go:371] multinode-344225-m02 host status = "Stopped" (err=<nil>)
	I1202 21:52:46.182062  427617 status.go:384] host is not running, skipping remaining checks
	I1202 21:52:46.182070  427617 status.go:176] multinode-344225-m02 status: &{Name:multinode-344225-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.08s)
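
The non-zero exits above are expected: minikube status signals cluster state through its exit code (7 here for a stopped host), so scripts should branch on the code rather than parse stdout. A minimal sketch, assuming the same profile name as in the log:

    # Check cluster state via the exit code; 0 = running, non-zero = not running
    # (the log above shows exit code 7 for a stopped host).
    minikube -p multinode-344225 status >/dev/null 2>&1
    rc=$?
    if [ "$rc" -eq 0 ]; then
      echo "cluster running"
    else
      echo "cluster not running (minikube status exit code: $rc)"
    fi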

                                                
                                    
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-344225 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1202 21:52:50.413752  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-344225 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (51.145049957s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-344225 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.82s)

                                                
                                    
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-344225
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-344225-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-344225-m02 --driver=docker  --container-runtime=containerd: exit status 14 (99.860661ms)

                                                
                                                
-- stdout --
	* [multinode-344225-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-344225-m02' is duplicated with machine name 'multinode-344225-m02' in profile 'multinode-344225'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-344225-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-344225-m03 --driver=docker  --container-runtime=containerd: (33.077391018s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-344225
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-344225: exit status 80 (359.117501ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-344225 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-344225-m03 already exists in multinode-344225-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-344225-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-344225-m03: (2.09595229s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.69s)
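
Both rejections above come from the same rule: a profile name may not collide with an existing profile or with one of its machine names (profile, profile-m02, profile-m03, ...). A hand-run sketch of the two failure modes, assuming a fresh hypothetical profile named "demo":

    minikube start -p demo --driver=docker --container-runtime=containerd
    minikube node add -p demo     # creates machine demo-m02 inside profile demo
    minikube start -p demo-m02    # exit 14 (MK_USAGE): name duplicates a machine in profile demo
    minikube start -p demo-m03    # standalone profile; a later 'minikube node add -p demo'
                                  # would exit 80 (GUEST_NODE_ADD), as in the log above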

                                                
                                    
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-098261 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
E1202 21:54:27.199426  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:54:44.122425  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-098261 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (1m0.761571012s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-098261 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-098261 image pull gcr.io/k8s-minikube/busybox: (2.267868816s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-098261
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-098261: (5.91436614s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-098261 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-098261 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (47.845824616s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-098261 image list
helpers_test.go:175: Cleaning up "test-preload-098261" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-098261
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-098261: (2.462205612s)
--- PASS: TestPreload (119.49s)
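
The sequence above is the substance of the test: an image pulled while preloads are disabled must survive a stop and a restart with preloads re-enabled. Reproduced by hand (profile name "preload-check" is arbitrary):

    minikube start -p preload-check --preload=false --driver=docker --container-runtime=containerd
    minikube -p preload-check image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-check
    minikube start -p preload-check --preload=true
    minikube -p preload-check image list    # busybox should still be listed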

                                                
                                    
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-080528 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-080528 --memory=3072 --driver=docker  --container-runtime=containerd: (29.465336616s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-080528 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1202 21:56:46.972508  443485 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:56:46.972657  443485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:56:46.972668  443485 out.go:374] Setting ErrFile to fd 2...
	I1202 21:56:46.972672  443485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:56:46.972999  443485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:56:46.973359  443485 out.go:368] Setting JSON to false
	I1202 21:56:46.973513  443485 mustload.go:66] Loading cluster: scheduled-stop-080528
	I1202 21:56:46.974135  443485 config.go:182] Loaded profile config "scheduled-stop-080528": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 21:56:46.974229  443485 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/scheduled-stop-080528/config.json ...
	I1202 21:56:46.974460  443485 mustload.go:66] Loading cluster: scheduled-stop-080528
	I1202 21:56:46.974605  443485 config.go:182] Loaded profile config "scheduled-stop-080528": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-080528 -n scheduled-stop-080528
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-080528 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1202 21:56:47.434712  443572 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:56:47.434829  443572 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:56:47.434846  443572 out.go:374] Setting ErrFile to fd 2...
	I1202 21:56:47.434851  443572 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:56:47.435098  443572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:56:47.435360  443572 out.go:368] Setting JSON to false
	I1202 21:56:47.435543  443572 daemonize_unix.go:73] killing process 443501 as it is an old scheduled stop
	I1202 21:56:47.435686  443572 mustload.go:66] Loading cluster: scheduled-stop-080528
	I1202 21:56:47.436094  443572 config.go:182] Loaded profile config "scheduled-stop-080528": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 21:56:47.436204  443572 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/scheduled-stop-080528/config.json ...
	I1202 21:56:47.436407  443572 mustload.go:66] Loading cluster: scheduled-stop-080528
	I1202 21:56:47.436551  443572 config.go:182] Loaded profile config "scheduled-stop-080528": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1202 21:56:47.449128  263241 retry.go:31] will retry after 113.431µs: open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/scheduled-stop-080528/pid: no such file or directory
I1202 21:56:47.450269  263241 retry.go:31] will retry after 202.086µs: open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/scheduled-stop-080528/pid: no such file or directory
I1202 21:56:47.451487  263241 retry.go:31] will retry after 141.791µs: open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/scheduled-stop-080528/pid: no such file or directory
I1202 21:56:47.452564  263241 retry.go:31] will retry after 264.908µs: open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/scheduled-stop-080528/pid: no such file or directory
I1202 21:56:47.453730  263241 retry.go:31] will retry after 521.639µs: open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/scheduled-stop-080528/pid: no such file or directory
I1202 21:56:47.454921  263241 retry.go:31] will retry after 1.045698ms: open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/scheduled-stop-080528/pid: no such file or directory
I1202 21:56:47.456083  263241 retry.go:31] will retry after 721.148µs: open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/scheduled-stop-080528/pid: no such file or directory
I1202 21:56:47.457175  263241 retry.go:31] will retry after 922.114µs: open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/scheduled-stop-080528/pid: no such file or directory
I1202 21:56:47.458323  263241 retry.go:31] will retry after 1.98598ms: open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/scheduled-stop-080528/pid: no such file or directory
I1202 21:56:47.460611  263241 retry.go:31] will retry after 4.435109ms: open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/scheduled-stop-080528/pid: no such file or directory
I1202 21:56:47.465881  263241 retry.go:31] will retry after 4.967708ms: open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/scheduled-stop-080528/pid: no such file or directory
I1202 21:56:47.471255  263241 retry.go:31] will retry after 10.561848ms: open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/scheduled-stop-080528/pid: no such file or directory
I1202 21:56:47.482495  263241 retry.go:31] will retry after 15.177291ms: open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/scheduled-stop-080528/pid: no such file or directory
I1202 21:56:47.498696  263241 retry.go:31] will retry after 14.904717ms: open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/scheduled-stop-080528/pid: no such file or directory
I1202 21:56:47.514019  263241 retry.go:31] will retry after 24.804052ms: open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/scheduled-stop-080528/pid: no such file or directory
I1202 21:56:47.539285  263241 retry.go:31] will retry after 22.10762ms: open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/scheduled-stop-080528/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-080528 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-080528 -n scheduled-stop-080528
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-080528
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-080528 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1202 21:57:13.382910  444060 out.go:360] Setting OutFile to fd 1 ...
	I1202 21:57:13.383121  444060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:57:13.383150  444060 out.go:374] Setting ErrFile to fd 2...
	I1202 21:57:13.383173  444060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 21:57:13.383800  444060 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 21:57:13.384151  444060 out.go:368] Setting JSON to false
	I1202 21:57:13.384304  444060 mustload.go:66] Loading cluster: scheduled-stop-080528
	I1202 21:57:13.384735  444060 config.go:182] Loaded profile config "scheduled-stop-080528": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
	I1202 21:57:13.384870  444060 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/scheduled-stop-080528/config.json ...
	I1202 21:57:13.385125  444060 mustload.go:66] Loading cluster: scheduled-stop-080528
	I1202 21:57:13.385305  444060 config.go:182] Loaded profile config "scheduled-stop-080528": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1202 21:57:36.513738  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:57:50.414377  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-080528
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-080528: exit status 7 (65.914621ms)

                                                
                                                
-- stdout --
	scheduled-stop-080528
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-080528 -n scheduled-stop-080528
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-080528 -n scheduled-stop-080528: exit status 7 (65.130651ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-080528" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-080528
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-080528: (4.696502579s)
--- PASS: TestScheduledStopUnix (105.77s)
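
The log exercises the three scheduled-stop behaviours: scheduling a stop, re-scheduling (the new request kills the old scheduler process, hence "killing process ... as it is an old scheduled stop"), and cancelling. As plain commands, using the flags shown above:

    minikube stop -p scheduled-stop-080528 --schedule 5m
    minikube stop -p scheduled-stop-080528 --schedule 15s    # replaces the pending 5m stop
    minikube stop -p scheduled-stop-080528 --cancel-scheduled
    minikube status --format={{.TimeToStop}} -p scheduled-stop-080528   # inspect the pending schedule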

                                                
                                    
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-711088 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-711088 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.588857402s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5f3435f6-01c4-441f-b66b-cf53a60e7159","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-711088] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e7bb48e2-5a65-4482-9be2-c151192f8043","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21997"}}
	{"specversion":"1.0","id":"f11adb7d-2d44-4b8a-a652-a5a2e3c38273","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"94e31284-039f-45e7-b775-9f14f1148b75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig"}}
	{"specversion":"1.0","id":"1f89e9b7-70e2-4c65-9fb9-6e31e2f78f08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube"}}
	{"specversion":"1.0","id":"12238a17-304b-4c6a-93ca-4bab95143d5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5bd1c86e-f2c7-44a9-aede-77df304a3532","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c6963181-97c9-49ec-8803-b8bcc8180a4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a6dd5873-a461-4eab-b1e3-6d196d004d52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"427196d1-7e84-4de6-8359-9ec028441430","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a5c4e5dc-d55b-4ac4-8eab-5415c43982ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"23429a48-63e0-47e0-8fb1-e6cac201b5e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-711088\" primary control-plane node in \"insufficient-storage-711088\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"91732001-b251-4ea4-9da7-eb42c1eafb66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764169655-21974 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e489bb84-e8df-4024-9a5d-702b4c5fffba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"bf6ce9e2-af30-4683-b566-227625b73870","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-711088 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-711088 --output=json --layout=cluster: exit status 7 (301.816993ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-711088","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-711088","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 21:58:13.093816  445692 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-711088" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-711088 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-711088 --output=json --layout=cluster: exit status 7 (308.563299ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-711088","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-711088","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 21:58:13.404139  445757 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-711088" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig
	E1202 21:58:13.414084  445757 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/insufficient-storage-711088/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-711088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-711088
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-711088: (1.95226345s)
--- PASS: TestInsufficientStorage (12.15s)
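
Exit code 26 (RSRC_DOCKER_STORAGE) is produced without actually filling the disk: the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE variables visible in the JSON output make minikube believe /var is at 100% of capacity. A sketch of the same check (profile name arbitrary):

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p storage-check --output=json --driver=docker --container-runtime=containerd
    echo $?    # 26; per the error advice, pass --force to skip the check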

                                                
                                    
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3424665541 start -p running-upgrade-376951 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1202 22:02:36.513619  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:02:50.414301  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3424665541 start -p running-upgrade-376951 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (33.399102125s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-376951 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-376951 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (28.557750123s)
helpers_test.go:175: Cleaning up "running-upgrade-376951" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-376951
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-376951: (2.427269226s)
--- PASS: TestRunningBinaryUpgrade (76.21s)

                                                
                                    
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2626690121 start -p missing-upgrade-164377 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2626690121 start -p missing-upgrade-164377 --memory=3072 --driver=docker  --container-runtime=containerd: (1m14.571130357s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-164377
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-164377: (1.858785502s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-164377
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-164377 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1202 21:59:44.122833  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-164377 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m25.223709324s)
helpers_test.go:175: Cleaning up "missing-upgrade-164377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-164377
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-164377: (2.130680486s)
--- PASS: TestMissingContainerUpgrade (175.42s)

                                                
                                    
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-448734 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-448734 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (94.703497ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-448734] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
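
This is a pure flag-validation check: --no-kubernetes and --kubernetes-version are mutually exclusive, and the error points at the global config as the usual source of a stray version. Following the error's own advice (profile name arbitrary):

    minikube start -p nok8s --no-kubernetes --kubernetes-version=v1.28.0   # exit 14 (MK_USAGE)
    minikube config unset kubernetes-version   # clear a globally configured version
    minikube start -p nok8s --no-kubernetes    # now valid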

                                                
                                    
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-448734 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-448734 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.983630096s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-448734 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.49s)

                                                
                                    
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-448734 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-448734 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (16.54189736s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-448734 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-448734 status -o json: exit status 2 (386.307614ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-448734","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-448734
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-448734: (2.291031637s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.22s)

                                                
                                    
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-448734 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-448734 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (8.336296853s)
--- PASS: TestNoKubernetes/serial/Start (8.34s)

                                                
                                    
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-448734 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-448734 "sudo systemctl is-active --quiet service kubelet": exit status 1 (380.312579ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)
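
The non-zero exit is the assertion here: systemctl is-active exits 3 for an inactive unit (surfaced by ssh as "Process exited with status 3"), which shows the kubelet never started in --no-kubernetes mode. Equivalent manual check:

    minikube ssh -p NoKubernetes-448734 "sudo systemctl is-active --quiet service kubelet"
    echo $?    # non-zero while the kubelet is not running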

                                                
                                    
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.25s)

                                                
                                    
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-448734
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-448734: (1.376476957s)
--- PASS: TestNoKubernetes/serial/Stop (1.38s)

                                                
                                    
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-448734 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-448734 --driver=docker  --container-runtime=containerd: (7.450222558s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.45s)

                                                
                                    
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-448734 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-448734 "sudo systemctl is-active --quiet service kubelet": exit status 1 (339.643803ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                    
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (11.16s)

                                                
                                    
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.245960760 start -p stopped-upgrade-636823 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.245960760 start -p stopped-upgrade-636823 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (31.206246594s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.245960760 -p stopped-upgrade-636823 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.245960760 -p stopped-upgrade-636823 stop: (1.236154946s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-636823 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-636823 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (20.162592318s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (52.61s)
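
The upgrade path under test: a cluster created and then stopped by the previous release must start cleanly under the binary being tested. A sketch, with /path/to/minikube-v1.35.0 standing in for the downloaded release binary (the /tmp/minikube-v1.35.0.* path in the log) and an arbitrary profile name:

    /path/to/minikube-v1.35.0 start -p upgrade-check --memory=3072 --vm-driver=docker --container-runtime=containerd
    /path/to/minikube-v1.35.0 -p upgrade-check stop
    out/minikube-linux-arm64 start -p upgrade-check --memory=3072 --driver=docker --container-runtime=containerd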

                                                
                                    
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-636823
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-636823: (2.508429761s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.51s)

                                                
                                    
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-942115 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E1202 22:03:59.587851  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-942115 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (52.51625874s)
--- PASS: TestPause/serial/Start (52.52s)

                                                
                                    
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-942115 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-942115 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.499343973s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.52s)

                                                
                                    
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-942115 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

                                                
                                    
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-942115 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-942115 --output=json --layout=cluster: exit status 2 (342.908202ms)

                                                
                                                
-- stdout --
	{"Name":"pause-942115","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-942115","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)
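
Exit status 2 is the expected outcome for a paused cluster: the JSON shows StatusCode 418 ("Paused") for the cluster and apiserver while the kubelet reports 405 ("Stopped"). Scripted check, assuming the same profile:

    minikube status -p pause-942115 --output=json --layout=cluster
    echo $?    # 2 while the cluster is paused, 0 once unpaused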

                                                
                                    
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-942115 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

                                                
                                    
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-942115 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.81s)

                                                
                                    
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-942115 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-942115 --alsologtostderr -v=5: (3.139652957s)
--- PASS: TestPause/serial/DeletePaused (3.14s)

                                                
                                    
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-942115
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-942115: exit status 1 (17.284021ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-942115: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.40s)
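
After delete, nothing profile-named should survive on the Docker side; the expected failure of docker volume inspect is the assertion. The same checks by hand (the --filter forms are an equivalent narrowing, not what the test itself runs):

    docker ps -a --filter name=pause-942115        # no containers left
    docker volume inspect pause-942115             # "no such volume" once deleted
    docker network ls --filter name=pause-942115   # no leftover network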

                                                
                                    
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-577910 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-577910 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (210.742302ms)

                                                
                                                
-- stdout --
	* [false-577910] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 22:05:21.287651  489676 out.go:360] Setting OutFile to fd 1 ...
	I1202 22:05:21.287777  489676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:05:21.287788  489676 out.go:374] Setting ErrFile to fd 2...
	I1202 22:05:21.287793  489676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 22:05:21.288060  489676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
	I1202 22:05:21.288438  489676 out.go:368] Setting JSON to false
	I1202 22:05:21.290627  489676 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":13660,"bootTime":1764699462,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1202 22:05:21.290834  489676 start.go:143] virtualization:  
	I1202 22:05:21.296290  489676 out.go:179] * [false-577910] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1202 22:05:21.299411  489676 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 22:05:21.299486  489676 notify.go:221] Checking for updates...
	I1202 22:05:21.306013  489676 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 22:05:21.308885  489676 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
	I1202 22:05:21.311762  489676 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
	I1202 22:05:21.314554  489676 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1202 22:05:21.317337  489676 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 22:05:21.320697  489676 config.go:182] Loaded profile config "kubernetes-upgrade-578337": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
	I1202 22:05:21.320816  489676 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 22:05:21.345882  489676 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1202 22:05:21.346017  489676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 22:05:21.430838  489676 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-02 22:05:21.421096161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1202 22:05:21.430942  489676 docker.go:319] overlay module found
	I1202 22:05:21.434073  489676 out.go:179] * Using the docker driver based on user configuration
	I1202 22:05:21.436945  489676 start.go:309] selected driver: docker
	I1202 22:05:21.436965  489676 start.go:927] validating driver "docker" against <nil>
	I1202 22:05:21.436980  489676 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 22:05:21.440470  489676 out.go:203] 
	W1202 22:05:21.443306  489676 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1202 22:05:21.446165  489676 out.go:203] 

                                                
                                                
** /stderr **
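Note that this failure is the asserted behavior, not a regression: the "false" group of TestNetworkPlugins deliberately starts minikube with CNI disabled, and minikube's validation rejects that combination for the containerd runtime before any cluster is created. A minimal reproduction of the guard, assuming the --cni=false flag the "false" group is named for (the MK_USAGE text is verbatim from the stderr above):

$ out/minikube-linux-arm64 start -p false-577910 --driver=docker --container-runtime=containerd --cni=false
X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI

# any concrete CNI choice (or the default auto selection) passes validation instead:
$ out/minikube-linux-arm64 start -p false-577910 --driver=docker --container-runtime=containerd --cni=bridge

Because start exits at validation, no false-577910 profile or kubeconfig context ever exists, which is what every probe in the debug log below runs into.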
net_test.go:88: 
----------------------- debugLogs start: false-577910 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-577910

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-577910

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-577910

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-577910

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-577910

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-577910

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-577910

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-577910

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-577910

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-577910

>>> host: /etc/nsswitch.conf:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: /etc/hosts:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: /etc/resolv.conf:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-577910

>>> host: crictl pods:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: crictl containers:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> k8s: describe netcat deployment:
error: context "false-577910" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-577910" does not exist

>>> k8s: netcat logs:
error: context "false-577910" does not exist

>>> k8s: describe coredns deployment:
error: context "false-577910" does not exist

>>> k8s: describe coredns pods:
error: context "false-577910" does not exist

>>> k8s: coredns logs:
error: context "false-577910" does not exist

>>> k8s: describe api server pod(s):
error: context "false-577910" does not exist

>>> k8s: api server logs:
error: context "false-577910" does not exist

>>> host: /etc/cni:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: ip a s:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: ip r s:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: iptables-save:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: iptables table nat:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> k8s: describe kube-proxy daemon set:
error: context "false-577910" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-577910" does not exist

>>> k8s: kube-proxy logs:
error: context "false-577910" does not exist

>>> host: kubelet daemon status:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: kubelet daemon config:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> k8s: kubelet logs:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 22:00:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-578337
contexts:
- context:
    cluster: kubernetes-upgrade-578337
    user: kubernetes-upgrade-578337
  name: kubernetes-upgrade-578337
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-578337
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kubernetes-upgrade-578337/client.crt
    client-key: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kubernetes-upgrade-578337/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-577910

>>> host: docker daemon status:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: docker daemon config:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: /etc/docker/daemon.json:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: docker system info:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: cri-docker daemon status:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: cri-docker daemon config:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: cri-dockerd version:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: containerd daemon status:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: containerd daemon config:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: /etc/containerd/config.toml:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: containerd config dump:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: crio daemon status:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: crio daemon config:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: /etc/crio:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

>>> host: crio config:
* Profile "false-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577910"

----------------------- debugLogs end: false-577910 [took: 3.250465481s] --------------------------------
helpers_test.go:175: Cleaning up "false-577910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-577910
--- PASS: TestNetworkPlugins/group/false (3.63s)
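One detail worth reading out of the kubectl config dump above: the only context present is kubernetes-upgrade-578337 (left over from the concurrently running upgrade test) and current-context is "", so every kubectl probe addressed at false-577910 fails with "context was not found" rather than a connection error. A hypothetical session showing the same distinction, not part of the test run:

$ kubectl config get-contexts -o name
kubernetes-upgrade-578337
$ kubectl --context false-577910 get pods
Error in configuration: context was not found for specified context: false-577910

The host-level probes print the "Profile ... not found" variant instead, presumably because those shell out to minikube (e.g. minikube ssh), which checks the profile list before touching kubectl at all.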

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (63.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-996157 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1202 22:11:07.201511  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-996157 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m3.414242359s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (63.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-996157 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8d71228a-eb80-430c-b6bf-b7d92f1d459f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8d71228a-eb80-430c-b6bf-b7d92f1d459f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004400236s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-996157 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.47s)
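For reference, the DeployApp step is plain kubectl against the profile's context. A minimal stand-in for testdata/busybox.yaml, reconstructed from what this report shows (pod name busybox, the integration-test=busybox label being waited on, and the gcr.io/k8s-minikube/busybox:1.28.4-glibc image that VerifyKubernetesImages lists later); the command and other spec fields are assumptions:

kubectl --context old-k8s-version-996157 create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox                 # pod name seen in the helpers_test.go lines above
  labels:
    integration-test: busybox   # label the 8m0s wait selects on
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc   # image reported by VerifyKubernetesImages
    command: ["sleep", "3600"]  # assumed; just keeps the pod Running
EOF
kubectl --context old-k8s-version-996157 exec busybox -- /bin/sh -c "ulimit -n"

The trailing exec mirrors the test's final check that exec into the running pod works at all.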

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-996157 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-996157 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.098588249s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-996157 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-996157 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-996157 --alsologtostderr -v=3: (12.188438827s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-996157 -n old-k8s-version-996157
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-996157 -n old-k8s-version-996157: exit status 7 (86.393707ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-996157 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
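The "(may be ok)" annotations are the test tolerating nonzero exits from minikube status: a stopped host prints Stopped and exits nonzero (7 in this run), so scripted callers have to branch on the exit code rather than treat nonzero as failure. A sketch of that pattern in shell, with the specific code taken from the log above rather than from any documented contract:

host=$(out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-996157)
rc=$?
if [ "$rc" -eq 0 ]; then
  echo "host running: $host"
elif [ "$rc" -eq 7 ]; then
  # exit 7 + "Stopped" is what this run shows for a cleanly stopped cluster
  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-996157
else
  echo "unexpected status failure (rc=$rc)" >&2
fi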

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (48.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-996157 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1202 22:12:36.513599  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-996157 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (47.960033439s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-996157 -n old-k8s-version-996157
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.34s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-nmgbx" [e20259fe-b40b-4267-9e12-ace2086a8a2d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003672429s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-nmgbx" [e20259fe-b40b-4267-9e12-ace2086a8a2d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003557989s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-996157 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)
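UserAppExistsAfterStop and AddonExistsAfterStop both poll for the dashboard pods by label after the restart. Outside the Go harness, roughly the same check is expressible with kubectl wait (an illustrative equivalent, not what helpers_test.go literally runs):

kubectl --context old-k8s-version-996157 -n kubernetes-dashboard \
  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
# on success: pod/kubernetes-dashboard-8694d4445c-nmgbx condition met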

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-996157 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)
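VerifyKubernetesImages flags every image outside the expected Kubernetes set; the kindest/kindnetd and busybox entries above are the test's own fixtures rather than leaks. The underlying data is just the runtime's image list in JSON; assuming the output is an array of objects with a repoTags field (worth verifying against your minikube version), a jq one-liner extracts the tags:

out/minikube-linux-arm64 -p old-k8s-version-996157 image list --format=json | jq -r '.[].repoTags[]'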

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-996157 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-996157 -n old-k8s-version-996157
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-996157 -n old-k8s-version-996157: exit status 2 (403.49428ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-996157 -n old-k8s-version-996157
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-996157 -n old-k8s-version-996157: exit status 2 (425.930171ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-996157 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-996157 -n old-k8s-version-996157
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-996157 -n old-k8s-version-996157
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.74s)
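The Pause sequence above encodes minikube's pause semantics: after pause -p, the status templates report the API server as Paused and the kubelet as Stopped, each via a nonzero exit (2) that the test again treats as acceptable, and unpause has to bring both back before the final two status runs pass (the log shows no Non-zero exit for them, i.e. exit 0). Condensed, with the observed and expected outputs as comments:

out/minikube-linux-arm64 pause -p old-k8s-version-996157
out/minikube-linux-arm64 status --format='{{.APIServer}}' -p old-k8s-version-996157  # Paused, exit 2
out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p old-k8s-version-996157    # Stopped, exit 2
out/minikube-linux-arm64 unpause -p old-k8s-version-996157
out/minikube-linux-arm64 status --format='{{.APIServer}}' -p old-k8s-version-996157  # expected Running, exit 0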

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (54.59s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-716386 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-716386 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (54.589217155s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (54.59s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-716386 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f8147221-905e-4f7c-ad66-b8a7aa665991] Pending
helpers_test.go:352: "busybox" [f8147221-905e-4f7c-ad66-b8a7aa665991] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f8147221-905e-4f7c-ad66-b8a7aa665991] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003186899s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-716386 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-716386 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-716386 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.05s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-716386 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-716386 --alsologtostderr -v=3: (12.048704425s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-716386 -n embed-certs-716386
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-716386 -n embed-certs-716386: exit status 7 (66.062724ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-716386 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (58.44s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-716386 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1202 22:14:44.122890  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-716386 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (58.084297614s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-716386 -n embed-certs-716386
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (58.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-crvjs" [aea9c404-330d-4c42-b739-9599b87e8318] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003078915s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-crvjs" [aea9c404-330d-4c42-b739-9599b87e8318] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003420437s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-716386 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-716386 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.99s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-716386 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-716386 -n embed-certs-716386
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-716386 -n embed-certs-716386: exit status 2 (331.242267ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-716386 -n embed-certs-716386
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-716386 -n embed-certs-716386: exit status 2 (335.543266ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-716386 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-716386 -n embed-certs-716386
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-716386 -n embed-certs-716386
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (44.187131638s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.19s)
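This group's distinguishing flag is --apiserver-port=8444 in place of the default 8443. A quick, illustrative way to confirm the port took effect (not part of the test; the node IP shown is a placeholder):

kubectl --context default-k8s-diff-port-444714 cluster-info
# Kubernetes control plane is running at https://<node-ip>:8444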

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-444714 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d055706d-99e0-4e18-a2db-8db9d319b31a] Pending
helpers_test.go:352: "busybox" [d055706d-99e0-4e18-a2db-8db9d319b31a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d055706d-99e0-4e18-a2db-8db9d319b31a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003381757s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-444714 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-444714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-444714 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-444714 --alsologtostderr -v=3
E1202 22:16:28.655072  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:16:28.661524  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:16:28.672970  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:16:28.694484  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:16:28.736116  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:16:28.817639  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:16:28.979248  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:16:29.301013  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:16:29.943166  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:16:31.224587  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:16:33.785923  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-444714 --alsologtostderr -v=3: (12.081841451s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-444714 -n default-k8s-diff-port-444714
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-444714 -n default-k8s-diff-port-444714: exit status 7 (68.556643ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-444714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2
E1202 22:16:38.908095  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:16:49.150379  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:17:09.632381  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/old-k8s-version-996157/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-444714 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.2: (47.970543176s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-444714 -n default-k8s-diff-port-444714
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kmgjx" [d6493a6e-b1fa-4e13-8a15-f3f5c66b97f8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003085419s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kmgjx" [d6493a6e-b1fa-4e13-8a15-f3f5c66b97f8] Running
E1202 22:17:36.514001  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003174242s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-444714 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-444714 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)
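
The image audit relies on minikube's own listing and can be repeated as a one-liner while the profile still exists. Non-minikube images (here the busybox test image and the kindnet CNI image) are reported but do not fail the check:

  out/minikube-linux-arm64 -p default-k8s-diff-port-444714 image list --format=json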

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-444714 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-444714 -n default-k8s-diff-port-444714
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-444714 -n default-k8s-diff-port-444714: exit status 2 (334.978203ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-444714 -n default-k8s-diff-port-444714
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-444714 -n default-k8s-diff-port-444714: exit status 2 (354.245687ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-444714 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-444714 -n default-k8s-diff-port-444714
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-444714 -n default-k8s-diff-port-444714
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.19s)
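
The "status error ... (may be ok)" lines are expected: minikube status deliberately exits non-zero while the cluster is paused. The whole cycle can be replayed with the same Go templates the test uses (profile name is illustrative):

  minikube pause -p <profile>
  minikube status -p <profile> --format='{{.APIServer}}'   # Paused, exit status 2
  minikube unpause -p <profile>
  minikube status -p <profile> --format='{{.APIServer}}'   # expected Running, exit status 0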

TestStartStop/group/no-preload/serial/Stop (1.29s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-904303 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-904303 --alsologtostderr -v=3: (1.291788112s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.29s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-904303 -n no-preload-904303
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-904303 -n no-preload-904303: exit status 7 (65.743791ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-904303 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
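
Here exit status 7 is the stopped-profile case, distinct from the paused case above, and the test likewise treats it as acceptable before enabling the addon on the stopped cluster. A hand-run sketch (profile name is illustrative):

  minikube status -p <profile> --format='{{.Host}}'; echo $?   # Stopped / 7 on a stopped profile
  minikube addons enable dashboard -p <profile> --images=MetricsScraper=registry.k8s.io/echoserver:1.4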

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.31s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-250247 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-250247 --alsologtostderr -v=3: (1.314181583s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.31s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-250247 -n newest-cni-250247
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-250247 -n newest-cni-250247: exit status 7 (65.317163ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-250247 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-250247 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestNetworkPlugins/group/auto/Start (79.82s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-577910 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-577910 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m19.821101217s)
--- PASS: TestNetworkPlugins/group/auto/Start (79.82s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-577910 "pgrep -a kubelet"
I1202 22:35:55.554544  263241 config.go:182] Loaded profile config "auto-577910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-577910 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-p76kh" [a14e98af-e7c8-4929-835f-93ef8d159fe0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-p76kh" [a14e98af-e7c8-4929-835f-93ef8d159fe0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004310611s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.29s)
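
The NetCatPod step force-replaces a small test deployment and waits for its pod to turn Ready. Outside the Go helpers, roughly the same wait can be expressed with kubectl alone; the wait command below is an illustrative equivalent, not what the harness runs:

  kubectl --context auto-577910 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context auto-577910 wait --for=condition=Ready pod -l app=netcat --timeout=15m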

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-577910 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)
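
Resolving kubernetes.default from inside the pod exercises the in-cluster DNS path end to end (pod to the cluster DNS service to CoreDNS), so a pass here means the CNI carries pod-to-service traffic correctly:

  kubectl --context auto-577910 exec deployment/netcat -- nslookup kubernetes.default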

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-577910 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-577910 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
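
Localhost and HairPin differ only in the target. The first probes the pod's own loopback; the second connects back to the pod through its own service name ("netcat"), which only succeeds when the CNI supports hairpin traffic:

  kubectl --context auto-577910 exec deployment/netcat -- /bin/sh -c 'nc -w 5 -i 5 -z localhost 8080'
  kubectl --context auto-577910 exec deployment/netcat -- /bin/sh -c 'nc -w 5 -i 5 -z netcat 8080'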

TestNetworkPlugins/group/kindnet/Start (77.82s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-577910 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-577910 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m17.817498705s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (77.82s)
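
The only difference from the auto run is the explicit CNI selection: --cni accepts a built-in plugin name (kindnet, flannel, calico, bridge) or, as in the custom-flannel run below, a path to a CNI manifest. Sketch (profile name is illustrative):

  minikube start -p <profile> --cni=kindnet --driver=docker --container-runtime=containerd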

TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-mxrb4" [fed1d96e-bbc9-4f13-a110-9720a2930073] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003280988s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-577910 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.28s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-577910 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7gm87" [72c2f1d0-a114-413d-a07c-c20b5c630cfb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1202 22:37:50.414476  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-7gm87" [72c2f1d0-a114-413d-a07c-c20b5c630cfb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.007255535s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.28s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-577910 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-577910 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-577910 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/flannel/Start (55.7s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-577910 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-577910 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (55.699981657s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.70s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-nrd4l" [d6aa46af-31cd-4307-91cd-19cb61fe4c79] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003715023s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
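
Each CNI group first waits for the plugin's own DaemonSet pods before running any traffic tests. The flannel variant can be checked directly; the namespace and label come from the log above:

  kubectl --context flannel-577910 -n kube-flannel get pods -l app=flannel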

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-577910 "pgrep -a kubelet"
I1202 22:39:20.900644  263241 config.go:182] Loaded profile config "flannel-577910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (8.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-577910 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mtkrp" [bdb7146e-8f30-4b18-9e72-76187c960241] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mtkrp" [bdb7146e-8f30-4b18-9e72-76187c960241] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.00416238s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.27s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-577910 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-577910 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-577910 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/Start (56.86s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-577910 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-577910 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (56.859680121s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (56.86s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-577910 "pgrep -a kubelet"
I1202 22:40:48.417358  263241 config.go:182] Loaded profile config "custom-flannel-577910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-577910 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h6kjr" [de0c23ac-1d73-4356-a4bd-75eca2582f76] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-h6kjr" [de0c23ac-1d73-4356-a4bd-75eca2582f76] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003372435s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.28s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-577910 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-577910 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-577910 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/Start (44.75s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-577910 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-577910 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (44.747430633s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (44.75s)
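
--enable-default-cni asks for kubelet's basic bridge CNI rather than a managed plugin; in current minikube it is effectively an older spelling of the explicit --cni=bridge run that follows. Sketch (profile name is illustrative):

  minikube start -p <profile> --enable-default-cni=true --driver=docker --container-runtime=containerd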

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-577910 "pgrep -a kubelet"
I1202 22:42:04.445999  263241 config.go:182] Loaded profile config "enable-default-cni-577910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-577910 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-v6tvp" [3c97226c-f6b8-444c-b5f3-f08321bb4ea4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-v6tvp" [3c97226c-f6b8-444c-b5f3-f08321bb4ea4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003747734s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-577910 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-577910 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-577910 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/bridge/Start (73.95s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-577910 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-577910 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m13.950828501s)
--- PASS: TestNetworkPlugins/group/bridge/Start (73.95s)

TestNetworkPlugins/group/calico/Start (60.7s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-577910 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1202 22:42:53.699203  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kindnet-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:43:03.941041  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kindnet-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:43:24.422323  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kindnet-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 22:43:39.688687  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/auto-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-577910 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m0.703619656s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.70s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-577910 "pgrep -a kubelet"
I1202 22:43:48.569986  263241 config.go:182] Loaded profile config "bridge-577910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.46s)

TestNetworkPlugins/group/bridge/NetCatPod (9.37s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-577910 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zgx68" [26de7594-67fe-47b9-89b1-4c702ed42d71] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zgx68" [26de7594-67fe-47b9-89b1-4c702ed42d71] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004004727s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.37s)

TestNetworkPlugins/group/calico/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-pkfrz" [04b88614-8ddf-4090-8902-0af9dee75136] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-pkfrz" [04b88614-8ddf-4090-8902-0af9dee75136] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003686134s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.00s)

TestNetworkPlugins/group/bridge/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-577910 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-577910 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-577910 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-577910 "pgrep -a kubelet"
I1202 22:43:59.201815  263241 config.go:182] Loaded profile config "calico-577910": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

TestNetworkPlugins/group/calico/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-577910 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-27hpd" [23e4990f-e53e-4450-947e-03a5d8a54a31] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-27hpd" [23e4990f-e53e-4450-947e-03a5d8a54a31] Running
E1202 22:44:05.384464  263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kindnet-577910/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003496061s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.29s)

TestNetworkPlugins/group/calico/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-577910 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

TestNetworkPlugins/group/calico/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-577910 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

TestNetworkPlugins/group/calico/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-577910 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.29s)

Test skip (37/417)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0.16
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.43
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0.01
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
379 TestStartStop/group/disable-driver-mounts 0.15
392 TestNetworkPlugins/group/kubenet 3.48
400 TestNetworkPlugins/group/cilium 3.87

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.16s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1202 20:49:55.122652  263241 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
W1202 20:49:55.236588  263241 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
W1202 20:49:55.286902  263241 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.16s)
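
This skip is the expected outcome for a beta Kubernetes version: the test probes the two preload mirrors and, on a 404 from both, falls back to pulling images normally instead of using a preloaded tarball. The probe can be repeated by hand with the URL from the log:

  curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 | head -n1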

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.43s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-027151 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-027151" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-027151
--- SKIP: TestDownloadOnlyKic (0.43s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
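All three tunnel DNS tests above skip for the same reason: DNS forwarding is implemented only for the Hyperkit driver on macOS. A sketch of that OS-and-driver gate, with hypothetical names mirroring the check at functional_test_tunnel_test.go:99:

// Illustrative OS/driver gate; names are assumptions, not minikube's code.
package test

import (
	"runtime"
	"testing"
)

// skipUnlessHyperkitOnDarwin skips DNS-forwarding tests everywhere except
// the one supported combination: the hyperkit driver on macOS.
func skipUnlessHyperkitOnDarwin(t *testing.T, driver string) {
	t.Helper()
	if runtime.GOOS != "darwin" || driver != "hyperkit" {
		t.Skip("DNS forwarding is only supported for Hyperkit on Darwin, skipping")
	}
}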

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-122586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-122586
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
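Even a skipped group still deletes its pre-created profile, as the helpers_test.go lines above show. A sketch of that cleanup step using os/exec (the binary path matches this job's layout; the helper itself is hypothetical):

// Illustrative profile cleanup mirroring helpers_test.go's delete step.
package test

import (
	"os/exec"
	"testing"
)

// cleanupProfile removes a minikube profile left behind by a skipped test.
func cleanupProfile(t *testing.T, profile string) {
	t.Helper()
	out, err := exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).CombinedOutput()
	if err != nil {
		t.Logf("failed to delete profile %q: %v\n%s", profile, err, out)
	}
}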

TestNetworkPlugins/group/kubenet (3.48s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-577910 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-577910

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-577910

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-577910

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-577910

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-577910

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-577910

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-577910

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-577910

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-577910

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-577910

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: /etc/hosts:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: /etc/resolv.conf:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-577910

>>> host: crictl pods:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: crictl containers:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> k8s: describe netcat deployment:
error: context "kubenet-577910" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-577910" does not exist

>>> k8s: netcat logs:
error: context "kubenet-577910" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-577910" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-577910" does not exist

>>> k8s: coredns logs:
error: context "kubenet-577910" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-577910" does not exist

>>> k8s: api server logs:
error: context "kubenet-577910" does not exist

>>> host: /etc/cni:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: ip a s:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: ip r s:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: iptables-save:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: iptables table nat:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-577910" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-577910" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-577910" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: kubelet daemon config:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> k8s: kubelet logs:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 22:00:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-578337
contexts:
- context:
    cluster: kubernetes-upgrade-578337
    user: kubernetes-upgrade-578337
  name: kubernetes-upgrade-578337
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-578337
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kubernetes-upgrade-578337/client.crt
    client-key: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kubernetes-upgrade-578337/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-577910

>>> host: docker daemon status:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: docker daemon config:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: docker system info:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: cri-docker daemon status:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: cri-docker daemon config:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: cri-dockerd version:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: containerd daemon status:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: containerd daemon config:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: containerd config dump:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: crio daemon status:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: crio daemon config:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: /etc/crio:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

>>> host: crio config:
* Profile "kubenet-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577910"

----------------------- debugLogs end: kubenet-577910 [took: 3.327335354s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-577910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-577910
--- SKIP: TestNetworkPlugins/group/kubenet (3.48s)
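Every probe in the debug logs above fails identically because the kubenet-577910 profile never existed as a kubectl context: the dumped kubeconfig only knows kubernetes-upgrade-578337 and has an empty current-context. A minimal sketch of that missing-context check with client-go's clientcmd (the kubeconfig path handling is illustrative):

// Illustrative check for a missing kubectl context using client-go.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the debug collector dumped above.
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	const name = "kubenet-577910"
	if _, ok := cfg.Contexts[name]; !ok {
		// This is the condition behind kubectl's
		// "context was not found for specified context" error.
		fmt.Printf("context %q not found; kubeconfig defines %d context(s)\n", name, len(cfg.Contexts))
	}
}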

TestNetworkPlugins/group/cilium (3.87s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-577910 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-577910

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-577910

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-577910

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-577910

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-577910

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-577910

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-577910

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-577910

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-577910

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-577910

>>> host: /etc/nsswitch.conf:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: /etc/hosts:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: /etc/resolv.conf:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-577910

>>> host: crictl pods:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: crictl containers:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> k8s: describe netcat deployment:
error: context "cilium-577910" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-577910" does not exist

>>> k8s: netcat logs:
error: context "cilium-577910" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-577910" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-577910" does not exist

>>> k8s: coredns logs:
error: context "cilium-577910" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-577910" does not exist

>>> k8s: api server logs:
error: context "cilium-577910" does not exist

>>> host: /etc/cni:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: ip a s:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: ip r s:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: iptables-save:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: iptables table nat:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-577910

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-577910

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-577910" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-577910" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-577910

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-577910

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-577910" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-577910" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-577910" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-577910" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-577910" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: kubelet daemon config:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> k8s: kubelet logs:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 22:00:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-578337
contexts:
- context:
    cluster: kubernetes-upgrade-578337
    user: kubernetes-upgrade-578337
  name: kubernetes-upgrade-578337
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-578337
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kubernetes-upgrade-578337/client.crt
    client-key: /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/kubernetes-upgrade-578337/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-577910

>>> host: docker daemon status:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: docker daemon config:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: docker system info:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: cri-docker daemon status:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: cri-docker daemon config:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: cri-dockerd version:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: containerd daemon status:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: containerd daemon config:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: containerd config dump:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: crio daemon status:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: crio daemon config:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: /etc/crio:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

>>> host: crio config:
* Profile "cilium-577910" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577910"

----------------------- debugLogs end: cilium-577910 [took: 3.707293485s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-577910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-577910
--- SKIP: TestNetworkPlugins/group/cilium (3.87s)